THE GLOBAL VOICE OF CLINICAL RESEARCH PROFESSIONALS

Volume 26 | Issue 4 | August 2012

Performance Metrics in Clinical Trials
● Using Metrics to Improve Trial Management ● How to Display Metrics ● R&D Efficiency Using Metrics ● Metrics in Quality Systems ● Metrics in Medical Imaging ● Site-Centric Operational Metrics ● Predictive Analytics


ACRP CERTIFICATIONS
THE TRUSTED MARK OF EXCELLENCE IN CLINICAL RESEARCH℠

“When I began working at a sponsor company, they requested that all CRAs get certified. I had heard that the ACRP exam was harder but was more widely recognized, so that was the one I decided to take. Since that time, being Certified has helped me professionally in terms of having an edge when interviewing, being constantly sought after by recruiters, and staying informed of what is relevant.”

–Cheryl Cox, CCRA®

The Value of Certification
ACRP Certifications have become the industry standard for qualified clinical research professionals, worldwide. Achieve your CCRA® (Certified Clinical Research Associate), CCRC® (Certified Clinical Research Coordinator), or CPI® (Certified Physician Investigator) designation from the only organization in the world that allows professionals to attain Certification in the job function they actually perform.

Earn Your Certification
Applications Due: August 14, 2012 (Note: The $150 late fee is in effect.)

The Academy of Clinical Research Professionals (The Academy) is an affiliate organization of the Association of Clinical Research Professionals.

*The CCRA® and CCRC® programs are accredited by the National Commission for Certifying Agencies (NCCA).

www.acrpnet.org/certification

Where leading companies and top talent meet

Valesta has a proven track record of successfully matching skilled professionals with leading companies to form efficient and cost-effective clinical resource solutions. By partnering with us, you tap into the expertise and resources of a company that’s been in the staffing business since 1985.

At Valesta, we put People First. Our mission is to help organizations thrive and people build rewarding careers by putting highly skilled professionals to work exactly when and where they are needed. From functional outsourcing to direct hire, and short- and long-term contract staffing, Valesta offers a full range of solutions for our clients and excellent opportunities for clinical research professionals. Top talent is placed in specialty areas, including clinical data, clinical monitoring, medical writing, biometrics, and regulatory affairs.

Call 866.445.2465 or visit valesta.com On Assignment Corporate Headquarters; 26745 Malibu Hills Road, Calabasas, CA 91301 On Assignment is an Equal Opportunity Employer, M/F/D/V.

August 2012  •  Volume 26, Issue 4  •  ISSN 1088-2111

CHAIR’S MESSAGE


 5 | Principle Four: Humans are Biased by “Common Sense” Clara H. Heering, MSc, MSc Chair, Association Board of Trustees

GUEST EDITORS’ MESSAGE


 7 | Performance Metrics in Clinical Trials: Putting the Promise into Practice Linda B. Sullivan, MBA | Liz Wool, RN, BSN, CCRA, CMT


Like piles of paperwork that were generated but never analyzed, many institutions involved in clinical research used to go through the effort and expense of collecting performance metrics without ever using them. This situation is improving, however, and the articles in this issue of The Monitor address a variety of aspects that need to be considered when developing a performance metrics program. They provide a “how-to” road map and examples of how some organizations are using performance metrics to achieve the type of process improvements that the industry needs in today’s environment.

PEER REVIEWED ARTICLES

Performance Metrics in Clinical Trials

 9 | Using Metrics to Direct Performance Improvement Efforts in Clinical Trial Management Keith Dorricott, BSc

15 | Clinical Metrics 102: Best Practices for the Visualization of Clinical Performance Metrics Paul Hake, BEng, ACA

23 | What Gets Measured Gets Fixed: Using Metrics to Make Continuous Progress in R&D Efficiency David S. Zuckerman, MS

29 | Intertwining Quality Management Systems with Metrics to Improve Trial Quality Liz Wool, RN, BSN, CCRA, CMT

36 | Metrics in Medical Imaging: Changing the Picture Hui Jing Yu, PhD | Colin G. Miller, PhD | Dawn Flitcraft

Earn 3.0 Credits in this issue of The Monitor! See Home Study, page 86

41 | A Case for Site-Centric Operational Metrics Henry J. Durivage, PharmD | Srini Kalluri, BS

45 | Predictive Analytics: A Nonstatistical Perspective as Related to Executing Effective Clinical Trials April Davis, MS

The views, research methods, and conclusions expressed in articles published in The Monitor are those of the individual author(s) and not necessarily those of ACRP.

Other Issues in Clinical Research

© Copyright 2012 Association of Clinical Research Professionals. All rights reserved. For permission to photocopy or use material published herein, contact www.copyright.com.

51 | Study Withdrawals: Follow the Reason to Find the Solution Carmen R. Gonzalez, JD

53 | CRC Primer: Tips for Achieving Operational Excellence Wendy Boone, RN, MPH, CCRC, CCRA | Jennifer Zimmerer, MS, RD, CCRP | Kimberly Kreller, RN, BSN


COLUMNS

CRA CENTRAL

57 | Modernizing Monitoring: The Case for Risk-Based Monitoring Suzanne Heske, RPh, MS, CCRA, BCNP

DATA-TECH CONNECT

Editor-in-Chief: A. Veronica Precup, [email protected], (703) 254-8100
Associate Editor: Gary W. Cramer

EDITORIAL ADVISORY BOARD
Chair: Iris Gorter de Vries, PhD, Consultant
Vice Chair: Erika J. Stevens, MA, Ernst & Young, LLP
Dawn Carpenter, BS, MHsc, CCRC, Nebraska Heart Institute/Nebraska Specialty Network
Norbert Clemens, MD, PhD, CRS Mannheim GmbH
Amy Leigh Davis, DBA, MBA, Mercy Hospital and Medical Center
Marie Fleisner, CMA, CUT, Marshfield Clinic Research Foundation
Beth Harper, MBA, Clinical Performance Partners
Dana Keane, BS, CCRA, CCRP, Optos
Vicky Parikh, MD, MPH, Mid-Atlantic Medical Research Centers
Theresa Straut, BA, CIP, RAC, U.S. Department of Veterans Affairs
Liz Wool, RN, BSN, CCRA, CMT, QD-Quality and Training Solutions, Inc.
Franeli Yadao, MSc, BA, CCRA, Cangene Corporation

ADVERTISING
Sabrina Sheth, The Townsend Group, (301) 215-6710 ext. 104, [email protected]
Derek Wenzell, The Townsend Group, (301) 215-6710 ext. 131, [email protected]

For membership questions, contact ACRP at [email protected] or (703) 254-8100.




59 | Making Virtual Teams Work Kirk Mousley, MSEE, PhD

OFF THE WIRE

61 | Form and Function in the News Gary W. Cramer

OPERATING ASSUMPTIONS

63 | First, Kill All the Lawyers Ronald S. Waife

QA Q&A CORNER

65 | CRO Conundrums and Access to Electronic Systems Terri P. Kelly, RN, MSQA, CCRA, CQA

RESEARCH COMPLIANCE

67 | Risk-Based Integrated Quality Management and ISO 9001 Brent Ibata, PhD, JD, MPH, RAC, CCRC

ASSOCIATION NEWS

CERTIFICATION

69 | 2011 Academy Examination Annual Report Morgean Hirt, ACA

71 | ACRP Certifies 599 Clinical Research Professionals

CHAPTERS

76 | ACRP Chapter Listing 78 | Chapter Notes

APCR NEWS AND COLUMNS

80 | APCR Board of Trustees & Organizational Listing

APCR PRESIDENT’S MESSAGE

81 | Subject Protections or Social Contract? Michael J. Koren, MD, FACC, CPI

PI CORNER

84 | Finding Studies and Recruiting Patients: Is There Synergy? Joel S. Ross, MD, FACP, AGSF, CMD, CPI, LLC

DEPARTMENTS

86 | Home Study: Performance Metrics in Clinical Trials
90 | ACRP Board of Trustees, Committees, and Staff
92 | ACRP/APCR Uniform Code of Ethics and Professional Conduct
94 | Monitor Article Submission Guidelines
95 | Index of Advertisers
96 | Calendar of Events

CHAIR’S MESSAGE
Clara H. Heering, MSc, MSc

Principle Four: Humans are Biased by “Common Sense”

By gaining insights into and applying behavioral sciences, we can open the door to significant reduction of errors and accrual of cost savings in clinical trials.

The U.S. Food and Drug Administration and the European Medicines Agency have opened the door for clinical researchers to establish a new and improved system for developing innovative treatments with their recent papers on risk-based approaches to monitoring. The hope is that we will be able to enhance quality and develop more efficient paths to new treatments. In an ideal world, we should have a scientific assessment that leads to a sound protocol and investigator brochure, an efficient independent review of the proposed study from an ethical point of view, and an investigator team that is fully compliant with all processes and procedures as described. However, because we are imperfect humans, we have made mistakes, have not always been frank about them, and still dream about better treatments that are delivered with excellent evidence (see my previous three editorials). In order to address these human traits, we have, over the course of decades, arrived at a drug development system that mimics a Tayloristic engineering approach, with a step-by-step plan to process people, data, and quality control at most delivery points—namely, a system governed by the tenets of good clinical practice (GCP). Despite the intended rigor of this plan, audits reveal the same mistakes, year after year. If airlines had used a similar approach to their equivalent to GCP as we did in the last 20 years, and thus had the same audit results year after year, it is highly likely there would have been far more crashes, casualties, and fear of flying. So what did the airlines do differently than we did with our GCP? They organized their whole system so as to avoid errors; they learned from psychology, human behavior, and biological sciences; and they built their practices around this knowledge. Can we do the same?

What Can the Behavioral Sciences Tell Us?

My contention and intent is that, by gaining insights into and applying these behavioral sciences, we also can significantly change the way we plan, set up, conduct, and monitor our clinical trials, thereby opening the door to significant reduction of errors and accrual of cost savings. Today, I will focus on bias. Nobel Laureate Daniel Kahneman described key human characteristics with his colleague Amos Tversky (who unfortunately had died before the Nobel prize was awarded, and the prize is not awarded posthumously). In their seminal paper on “Judgment under uncertainty: heuristics and biases,”1 these authors describe three ways in which we are biased through our use of heuristics in judging probabilities and predicting values. Heuristics can be defined as the art of experience-based problem solving, learning, and discovery.

Perhaps when there is just too much information to process, we tend to use heuristics, which in lay language are often referred to as “common sense,” “educated guesses,” or “rules of thumb.” However, when we use common sense, we tend to be biased. Below, I have taken some ideas from Kahneman and Tversky to help frame the points I wish to make.

Three Biases to Beware of

The first description concerns the bias of representativeness. Let’s take an example: Being “qualified” for clinical research, as stated in GCP. Let’s imagine you are a young clinical research associate (CRA) and you have found an investigator with “excellent” qualifications. The file shows that this investigator has participated in multiple clinical trials in the last 15 years, has certificates recognizing participation in 30 GCP courses, has chaired many international conference sessions, and is a true worldwide key opinion leader in his or her disease area. Your junior CRA “common sense” leads you to believe that these excellent signs of qualification will result in a highly compliant site. If you are an experienced CRA or project manager, on the other hand, you may build a very different “common sense” assessment of this investigator. You may guesstimate that this description does not necessarily represent a highly compliant site, but could actually disguise a site with many problems. For instance, a key opinion leader might be someone who spends more time at conferences than with his or her clinical research practice, and who does not necessarily keep good oversight. From experience, you may know that the description that would impress a junior CRA fits a possible “illusion of validity”; in fact, the investigator may only be as good as his or her clinical research coordinator (CRC).

The second description concerns the bias of availability. Some examples:

● Biases due to retrievability of instances: A CRA may be more vigilant about verifying calibration of tools if a lack of calibration was a critical finding in one of his or her recent audits. Also, in general, the probability of higher frequency of an event will make it more present in a person’s mind; so, if a CRC discovered issues with investigational product accountability in the last three patients he or she encountered, remembering to verify this with all patients becomes easier.
● Biases due to the effectiveness of a search set: For example, it is easier to identify errors in informed consent documents (unique, discrete pages of paper) than it is in large files devoted to the medical histories and eligibility criteria of long-term, chronically ill patients.
● Biases due to “imaginability”: Junior clinical research professionals who have little experience may not be able to “imagine” the probabilities of various events and risks occurring in the conduct of the clinical trial they are just initiating. This may lead to grossly overestimating or underestimating certain risks and lead to a failure to build appropriate oversight processes starting from the trial initiation.

The third description concerns the bias of adjustment and anchoring. Evaluation of conjunctive and disjunctive events: Kahneman and Tversky demonstrate that the structure and sequence of events influence our estimation of risk. Overall, we are prone to underestimate the probability of (mal)functioning of our complex human body. The error comes from our human failure to correctly assess the differences in probability between “chain-like structure of conjunctions” and “funnel-like structure of disjunctions.” In other words, “even when the likelihood of failure in each [body] component is slight, the probability of an overall failure can be high if many components are involved.” Based on this knowledge, in order to assess patient safety, central monitoring should focus on the accumulation of adverse events in single subjects as well as focus on trends in distinct “harmful” adverse events.
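The arithmetic behind that quotation is easy to check. Here is a quick sketch in Python; the failure and success probabilities are assumed purely for illustration and do not come from the editorial:

```python
# Disjunctive ("funnel-like") risk: the system fails if ANY one of n
# components fails. Even a slight per-component failure probability
# compounds into a high overall failure probability.
p_fail_component = 0.01   # assumed per-component failure probability
n_components = 100

p_overall_failure = 1 - (1 - p_fail_component) ** n_components
print(f"P(overall failure) = {p_overall_failure:.2f}")   # about 0.63

# Conjunctive ("chain-like") success: ALL steps must succeed, so we
# tend to overestimate the probability of the whole chain working.
p_step_success = 0.95     # assumed per-step success probability
n_steps = 10
print(f"P(chain succeeds) = {p_step_success ** n_steps:.2f}")  # about 0.60
```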

In Conclusion

Since the publication in 1974 of the paper referred to earlier, Kahneman has continued to build considerable further knowledge on human behavior.2 This knowledge should support us in focusing our time, effort, and valuable resources in order to pre-empt and avoid costly errors, and to address malfunctions with effective speed. In turn, this new paradigm should support enhanced patient safety and validity of data at lower cost.

References
1. Tversky A, Kahneman D. 1974. Judgment under uncertainty: heuristics and biases. Science 185(4157): 1124–31.
2. Kahneman D. 2011. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.

GUEST EDITORS’ MESSAGE
Linda B. Sullivan, MBA | Liz Wool, RN, BSN, CCRA, CMT

Performance Metrics in Clinical Trials: Putting the Promise into Practice

In recent years, the clinical research enterprise has made strides in its efforts to use performance metrics to identify, analyze, and fix problems in clinical trial processes, with the aim of achieving improvements in critical performance measures, including efficiency, quality, and speed. In 2008, a survey of pharmaceutical and biotechnology organizations revealed that a total of 87% reported that the demand from their organizations for performance metrics was either “growing” or “rapidly growing.”1 Unfortunately, only 19% reported that they collected, reviewed, and effectively used performance metrics with their service providers. Nearly one-third of organizations reported that they collected and reviewed performance metrics, but did not take effective action with their service providers, and 16% reported that they collected performance metrics, but did not routinely review them. Why were organizations going through the effort and expense of collecting performance metrics, but not using them? These survey results illustrate the challenge that every organization faces with performance metrics; the process of defining, collecting, and using performance metrics in an effective manner is complex, and the industry has only begun to tackle it in earnest. During the four years since the survey was conducted, the industry has invested significant resources to gain a better understanding of how the complex process works. Organizations are working to define what should be measured, and to develop the information technology infrastructure to collect, report, and analyze performance metrics to drive positive change in clinical trial processes.

The articles in this issue of The Monitor address a variety of aspects that need to be considered when developing a performance metrics program. Further, they provide a “how-to” road map and examples of how organizations are using performance metrics to achieve the type of process improvements that the industry needs in today’s environment. In the lead article, Keith Dorricott uses examples to demonstrate the importance of measurement and the need to focus on the purpose of that measurement when defining performance metrics. Among other useful information, he highlights the philosophy of using metrics to measure process and not people performance. The aim of the article is to help readers gain a basic understanding of performance metrics before embarking into a formal metrics initiative for clinical trial management.

In the next article, Paul Hake states that a well-designed clinical performance metrics system should communicate important performance information effectively. Improvements in performance metrics reporting systems have made it easier to create a large array of visual charts; however, some charts do a better job of providing insight than others, as Hake illustrates. Managers do not have the time to spend extracting information from poorly designed or inappropriate charts. Performance metric reports should highlight critical data and provide the user with the confidence to make decisions that will improve processes and deliver results. This article presents insights and best practices to help decision-makers select the right type of chart that best communicates each selected performance metric.

Next up, David Zuckerman describes a tried-and-true approach for developing a performance metrics program for research and development or clinical operations organizations, and discusses the need to align metrics and incentives with an organization’s rhetoric and goals. He also describes how to create and implement a balanced metrics program to allow organizations to overcome such common problems as excessive workloads, rework, confusing or changing requirements, and disappointing outcomes in products and finances.

Liz Wool takes a closer look at quality management systems and performance metrics in her article, which provides a targeted review of a quality system that provides organizations with the ability to define, plan, monitor, measure, and continuously improve the quality of their work, with the inherent ability to identify possible performance issues through the appropriate use of metrics. Her article provides examples of key performance indicator and key quality indicator metrics, and critical questions to ask when evaluating the performance of a quality management system.

The next two articles present case studies describing how the use of performance metrics provided insight about performance challenges and the action steps organizations enacted to address the problems. Hui Jing Yu, Colin Miller, and Dawn Flitcraft describe how the use of imaging performance metrics to monitor image quality allowed appropriate levels of control for both an imaging core lab and sponsors, and thus enhanced trial performance and quality. Next, Henry Durivage and Srini Kalluri present three case studies from a collaboration of cancer centers to illustrate how site-centric operational metrics are more timely, take less effort to collect, are more actionable, and motivate the right behaviors. Furthermore, when sites work together to share their combined experience, aggregated site-centric metrics provide benchmarks for comparison between centers and an opportunity for collective learning.

In the final article, April Davis explores the use of predictive analytics during clinical trials as a method of supporting effective trial management and presenting trends in trial performance. Her article provides foundational principles, and defines and discusses the prominence of predictive analytics and its value and practical use in the execution of clinical trials.

The performance metrics principles described in these articles are universally applicable to biopharmaceutical and medical device organizations, contract research organizations, specialty core laboratories, and investigator sites. The time has come for the clinical research enterprise to embrace the use of time, quality, and cost performance metrics to drive needed improvements in clinical trials.

We hope that this issue will be the catalyst to unite all stakeholders on a journey of using performance metrics to achieve quality-driven, efficient clinical trials, and to champion the enterprise’s initiation of meaningful changes in the clinical trial process. Let us also ensure that we are working together in a timely, cost-effective, and quality-driven manner to achieve our ultimate goal—bringing new therapies to patient populations throughout the world.

Reference
1. Metrics Champion Consortium. 2008. Using CRO Standardized Performance Metrics to Enhance Partnership Performance.

Linda B. Sullivan, MBA, has served as vice president of operations at the Metrics Champion Consortium (MCC) since its inception in 2006. The MCC is a nonprofit organization dedicated to the development and support of performance metrics and quality tools within the clinical trial industry. She is a recognized expert in the areas of performance metrics, quality management, and process improvement. Prior to her work with the MCC, she was a management consultant for several global consulting companies. She can be reached at [email protected].

Liz Wool, RN, BSN, CCRA, CMT, has 22 years of experience in the clinical research industry. She is president and CEO of QD-Quality and Training Solutions, Inc. (QD-QTS), a clinical quality systems, training, and auditing consulting firm providing services to institutions, investigators, sponsors, and CROs. QD-QTS has offices in San Bruno, Calif., and Franklin, Tenn. A Certified Master Trainer and instructional designer, she is also a member of ACRP’s Association Board of Trustees and Editorial Advisory Board. She can be reached at [email protected].

PEER REVIEWED | PERFORMANCE METRICS IN CLINICAL TRIALS
Keith Dorricott, BSc

Using Metrics to Direct Performance Improvement Efforts in Clinical Trial Management

This article describes some key considerations for organizations as they review their approach to metrics or as they begin developing a key set of metrics.

We are used to the idea of measurement in the general practice of medicine, such as the vital signs taken after a baby is born or during the course of therapy for an illness in an adult. We would wonder what a medical practitioner was doing if he or she failed to take measurements such as blood pressure, heart rate, cholesterol, etc., when providing medical care and then compare those measurements to established norms. Measuring is fundamental to our ability to understand and control the world we live in, and this is particularly true for scientific disciplines; it is part of the scientific method. In clinical trial management, if you want to know how enrollment is going for a particular trial, you might look at the enrollment rate or the number of subjects enrolled. You might want to compare these to your initial expectations to see if you are on track and take remedial action if necessary. Without a defined process for measurement—a method, a way to capture data for review—what would be the point of taking the measurements in the first place? A metric has been defined as “a standard of measurement.”1 Metrics are essentially the definitions of how we collect data on measurement and the value of those measurements once they are made. Many organizations recognize the need to measure (to use metrics), but their measurement systems have typically built up over time and have not been put together from a strategic perspective. Hammer claims that across all the organizations with which he works, there is a wide consensus that they measure too much or too little, they measure the wrong things, and they do not use the metrics effectively.2 This article describes some key considerations for organizations as they review their approach to metrics or as they begin developing a key set of metrics. The overall approach is shown in Figure 1.

Measurement Needs a Purpose

In the general practice of medicine, there are myriad things you could measure. However, if you attempted to measure everything, there would be a substantial cost and the medical practitioner would be overloaded with all the data. The particular measurements that are useful will depend on the circumstances; measurements of the health of a newborn baby, for example, will be very different from those of someone who has high cholesterol. Similarly, in clinical trial management, there are many things you could measure. Often, companies attempt to measure and track large numbers of metrics simply because they can.3 If you measured and reported all possible metrics across a set of clinical trials, the cost would be significant. You would be completely confused about how to interpret all the data and would not have the resources to tackle all the questions that would arise, resulting in inaction. You would have the cost of data collection and reporting, but no outcome. Considering these factors, a typical flow for how a metric might be used is shown in Figure 2. To get through the steps of selecting and implementing a metric (Figure 1) and the steps involved in using a metric (Figure 2) involves many resources and their associated costs. Every metric is a balance of that cost versus the benefit you can get out of the metric itself. There are only a relatively small number of key things that are really useful to measure in a given circumstance (perhaps up to a dozen); these are often termed as the “key performance indicators.” So how do you go about determining those vital metrics? A key consideration that will help you focus on the important metrics is determining the purpose of measurement.4 For example, the purpose of measurement in clinical trial management might be:

● For a contract research organization (CRO) to be able to demonstrate oversight for the trials in its control to ensure timely, accurate, actionable data.
● To reduce the time to conduct clinical trials.
● To maximize the success of applications of new drugs to regulatory authorities.

As described in the following sections, once you have determined the purpose of your measurement, there are a number of other key considerations.

Figure 1  Selecting and Implementing a Metric (flowchart; steps include: determine the purpose of measurement; think “Big Picture”; select/define your metrics and targets; drive value in the metrics – use the “so what?” test; have a mix of metric types; start small; measure process not people performance; metrics definitions – use of industry standards; determine how you will collect, display and use the metrics; program and validate)

Figure 2  Using a Metric (flowchart: review and interpret; are data “on track”? If yes, continue to monitor; if no, perform root cause analysis, then agree and take actions to get the metric “on track”)

Think “Big Picture”

As with a masterpiece painting, in the general practice of medicine, the little details are important, but so is the overall composition. Treating symptoms individually without considering them together—and whether there is a common underlying cause—would not be in the best interests of the patient. Thinking “big picture” from the perspective of those who are going to use the metrics can help to narrow down the metrics that you plan to collect.

Part of thinking about the big picture is to select a mix of different metrics types. Having different types of metrics in your measurement system helps to minimize the chance of suboptimization.4,5 For example, focusing only on speed could make an activity faster; but if it adversely affects quality, then subsequent activities can be undermined and the overall effect might be to increase the length of the trial. Generating a protocol quickly might be desirable, but not if there are underlying quality issues that mean costly, time-consuming protocol amendments are needed later.

Some different types of metrics you should consider are shown in Table 1, along with examples and the risk of sub-optimizing by focusing only on a specific metric type. Note that a metric is typically either a lagging or leading indicator, and an indicator of one or more of the factors of cycle-time, timeliness, efficiency, or quality. The bad news is that some of the most important things are not always measurable.6 For example, many measurements are possible for a newborn baby, but can you measure the instinct of the midwife who looks at the baby and says he looks good and healthy? The right metrics can certainly help you manage the business, but they will never tell the whole story. Keeping the big picture view helps you to realize when to be cautious in using particular metrics without others, or in relying too much on metrics that might be leading to suboptimization.
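As an illustration of such a mix, here is a minimal sketch of how metric types might be encoded in a metrics system; Table 1 below gives the article’s own examples. The class, enum, and metric names are hypothetical, not part of any industry standard:

```python
from dataclasses import dataclass
from enum import Enum

class Timing(Enum):
    LEADING = "leading"    # act now to keep the trial/process on track
    LAGGING = "lagging"    # learn for future trials and baselining

class Aspect(Enum):
    CYCLE_TIME = "cycle-time"
    TIMELINESS = "timeliness"
    EFFICIENCY = "efficiency"
    QUALITY = "quality"

@dataclass
class MetricDefinition:
    """A written-down, unambiguous metric definition."""
    name: str
    description: str
    timing: Timing
    aspects: tuple  # of Aspect members; a metric can indicate several

# A deliberately mixed registry, so no single metric type dominates
registry = [
    MetricDefinition(
        "sites_activated_vs_expected",
        "Proportion of sites activated versus expected to date",
        Timing.LEADING, (Aspect.TIMELINESS,)),
    MetricDefinition(
        "lslv_to_db_lock_days",
        "Days from last subject last visit to database lock",
        Timing.LAGGING, (Aspect.CYCLE_TIME,)),
    MetricDefinition(
        "expedited_safety_reports_on_time_pct",
        "Proportion of expedited safety reports received within required timelines",
        Timing.LAGGING, (Aspect.QUALITY, Aspect.TIMELINESS)),
]
```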

Table 1  Metric Types

| Metric Type | Description | Example Metric | Risk of Focusing on This Metric Type Only |
| --- | --- | --- | --- |
| Leading Indicator | Provides information that you can act on immediately to get the trial/process back on track. | The proportion of sites activated versus expected would be a leading indicator for whether subject enrollment is likely to be on track. | Lack of data to help with process understanding and improvement |
| Lagging Indicator | Provides information that you can use for future trials or for baselining for process improvement efforts. | The time taken from “last subject last visit” to database lock. | Lack of data to affect current work and act before negative consequences occur |
| Cycle-time | Measures the time taken to complete a task. | The median time from subject visit to data entry into an electronic data capture system. | Faster cycle-time with poor quality leading to a process needing to be repeated unnecessarily; longer overall cycle times |
| Timeliness | Measures whether a particular milestone has been met. | The number of days between planned and actual dates of the first site activated. | Meeting the timelines, but using excessive resources and not at the required quality level |
| Efficiency | Measures the amount of resource required to complete a task or set of tasks versus that expected. | The difference between the actual final total contract value and the initial baseline contract value for a CRO running a clinical trial. | Process using minimal resources, but not meeting timelines |
| Quality | Measures how well an output from a process meets the requirements of the customer of that process. | The proportion of expedited safety reports that are received by regulatory authorities within the required timelines gives an indication of the quality of the pharmacovigilance reporting process. | High quality outputs, but missing timelines and with high cost |

Some Metrics Contain More Value

Some metrics are inherently more useful than others; they can tell the story that would otherwise need several “lesser” metrics. A good way to determine if you have selected one of these more powerful metrics is to use the “so what?” test.4 If you were to gather data on that metric, what would you do with it? What action might it drive? If you cannot think of actions that would result from collection of the data on a particular metric, it may not be of value to use that metric. For example, you might be interested in the quality of work performed at investigator sites in relation to the attention the sites have received from monitors. So you might select the number of investigator site audits in the last three months as a metric (see Table 2). Imagine you now have the data: There were two audits. So what? You don’t know how many audits there should have been; you don’t know how many sites could have been audited; and you don’t know the result of the audits. Of course, you could collect a variety of metrics that would capture those other details, but perhaps there is a single metric of more value, such as the number of critical observations? Imagine you now have the data: There were four critical observations. So what? Maybe there were 50 audits? Perhaps you could measure the mean number of critical observations per site audit?
Imagine you now have the data: There were two. So what? Here you have some actionable data; having an average of two critical observations per site audit would be a real cause for concern. You would want to understand the root cause, finding out which sites were audited and looking for systemic issues, such as a confusing protocol or poor training of site staff. This one metric has high value, as it gives an indication of quality of work at investigator sites and by monitors. It best matches the purpose of interest to you.

Table 2  Building Value into Your Metrics

| Possible Quality Metric | Data | So What? (increasing value down the rows) |
| --- | --- | --- |
| Number of investigator site audits in the last three months | 2 | Knowing this tells us nothing about the quality. |
| Number of critical observations in the last three months | 4 | Possibly a cause for concern, but we do not know how many audits there were. |
| Mean number of critical observations per site audit in the last three months | 2 | Definitely sounds like a cause for concern. We would want to take action and investigate further. |

In a similar way, if you are looking for a metric to indicate whether a clinical trial is on track, measuring the number of sites that have been activated does not pass the “so what?” test. Measuring the proportion of sites activated out of the total expected has more value; even better, however, would be to measure the proportion of sites that have been activated out of those expected to be activated at that particular time. This gives an immediate indicator of whether you are on track for activating sites, and there are clear actions you could take. It passes the “so what?” test.
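A minimal sketch in Python of the rollups just described; the audit records and site counts are invented for illustration and are not the article’s data:

```python
from datetime import date

# Invented audit records: (site_id, audit_date, critical_observations)
audits = [
    ("site-01", date(2012, 5, 14), 1),
    ("site-07", date(2012, 6, 2), 3),
]

# Metric 1: number of audits -- fails the "so what?" test on its own.
n_audits = len(audits)

# Metric 2: number of critical observations -- still lacks context.
n_critical = sum(obs for _, _, obs in audits)

# Metric 3: mean critical observations per audit -- actionable.
mean_per_audit = n_critical / n_audits if n_audits else 0.0
print(f"{n_audits} audits, {n_critical} critical observations, "
      f"{mean_per_audit:.1f} per audit")

# Site activation: compare to sites expected to be active BY NOW,
# not to the total planned for the whole trial.
activated, expected_by_now = 14, 20   # invented counts
print(f"Site activation vs. plan-to-date: {activated / expected_by_now:.0%}")
```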

Considering the value inherent in different metrics also brings out the importance of the definition of a metric. The definition should be written down so that there is no ambiguity; this definition can be used when trying to understand why the metric is at a particular value. There are industry organizations that have developed standardized, defined metrics for use in clinical trial management,7 and the potential metrics they provide can be used as a starting point for metrics selection. Using standard metrics also makes it easier to benchmark to compare performance across organizations and help to drive continuous improvement.

Metrics Should Measure Process Performance, Not People Performance

As you select your metrics, you should focus on the process rather than using the metrics to measure people. By using a metric to measure people, there is a high risk of sub-optimization, as individuals focus on the metric to the exclusion of everything else and “gaming” or “cheating” can result.8 Seddon describes the impact of implementing a metric by looking at the percentage of times ambulance staff reach “Category A” incidents within eight minutes of an emergency call.9 Using the metric to assess manager performance led to misreporting, such as “Category A” calls being reclassified as “Category B” when the eight-minute goal was not met and “Category B” calls classified as “Category A” when crews arrived within target. Also, varying the definition of the start and end times meant that eight minutes to one authority could be 10 minutes to another. As seen in the case just described, using a metric to measure the performance of people may induce people to spend their time trying to meet the target—often by any means—rather than to focus their efforts on trying to improve the process itself. It is one of the best ways to make people lose the “big picture” and sub-optimize. It would be better to involve the staff in determining appropriate metrics more clearly related to the purpose of the process—the health outcome for the patient—and then to get them to focus their efforts on using the metrics to understand their process better and to improve the process for everyone.

In Conclusion

Finally, you need to consider how you will display, validate, review, and act on the metrics. There are many systems that can be used for display, from basic Excel® spreadsheets to specific software designed for the purpose. Ensuring the data are accurate (i.e., validating) is an important step to give confidence to those who are going to use the metrics. All these efforts will be of little value, however, without a process like that shown in Figure 2 to review the metrics so that they can be used to drive decisions and actions. In the general practice of medicine, measurements are fundamental. Similarly, measuring the clinical trial process can give valuable information for use in tracking, understanding, and improving process performance. The number of metrics you track should be kept small (around eight to 12), to allow the organization to focus on what really matters and to minimize cost. There should be an overall purpose of measurement that will help you when selecting appropriate metrics to track and review. Ideally, metrics should represent more than a simple counting of items, and should cover a range of different types, such as lagging, leading, timeliness, cycle-time, quality, and efficiency. Having different metric types from across the process ensures that they complement each other and provide a better overall picture of performance. However, watch out for metrics that focus on improving one area to the detriment of another. The most crucial consideration is that the metrics are actually used—not for managing people performance, but to understand and improve processes. If particular metrics fail the “so what?” test, then consider removing or replacing them. Ideally, metrics measure the performance of systems and processes, and analysis of them should help direct your efforts in process improvement. By careful use of measurement in clinical trial management, we extend the scientific method beyond the science of the trials themselves, and that science reminds us of the really “big picture”—that the fundamental purpose of our efforts as clinical researchers is to improve patients’ lives.

References
1. Merriam-Webster Online Dictionary, www.merriam-webster.com/dictionary/metric.
2. Hammer M. 2007. The seven deadly sins of performance measurement and how to avoid them. MIT Sloan Management Review 7(43).
3. Nelson G. 2008. Implementing metrics management for improving clinical trials performance. BeyeNETWORK. Available at www.b-eye-network.com/view/7981.
4. Zuckerman DS. 2006. Pharmaceutical Metrics. Gower.
5. Sullivan L. 2011. Defining “quality that matters” in clinical trial startup activities. The Monitor 25(7): 22–6.
6. Nelson LS, quoted by Deming WE. 1982. Out of the Crisis. MIT Press, p. 121.
7. Metrics Champion Consortium, www.metricschampion.org/default.aspx.
8. Pyzdek T. 2012. Gaming the metrics—use metrics to guide improvement, not measure the performance of people. Quality Digest. Available at www.qualitydigest.com/inside/quality-insider-column/gaming-metrics.html.
9. Seddon J. 2005. Freedom from Command and Control. Vanguard Education, p. 213.

Keith Dorricott, BSc, is director for operations management, process improvement, and metrics at INC Research in the United Kingdom. He is an active member of the Metrics Champion Consortium, and worked with the organization to launch the Process Improvement Work Group in 2009. Prior to his seven years working on improving the clinical trial process at various contract research organizations, he was technical manager at Eastman Kodak manufacturing. It was at Kodak that he honed his skills in process improvement techniques, including Six Sigma and Lean. He is a Lean Sigma Master Black Belt. He can be reached at [email protected].

Your needs are unique. Choose the IRB with the flexibility to fit your needs, with ethics and integrity. New England IRB is the premier, AAHRPP-accredited, central IRB, providing quality study review services across North America.
● Single Point of Contact
● FastTrack™ Web Portal for secure document exchange and updates
● One-week Protocol Review Turnaround
● 24–48 Hour Site Review Turnaround

Contact us to discuss your next study.

85 Wells Ave | Newton, MA 02459 www.neirb.com | [email protected] 617.243.3924


Specialized Clinical Trial Management Systems
Now with Patient Reimbursement Cards!

Trial Management Organizations and Investigator Site Networks

Investigator Sites and Research Groups

Maintain control across all sites in everything from study start-up activity and project management through site payment. Account for all costs up front, reconcile payments, and automate payments to sites, investigators, vendors and subjects. Get visibility and improve processes with dashboards, streamlined workflows, and project management tools.

Get real-time visibility into study conduct across the entire clinical trial portfolio. Allegro CTMS@Site helps maintain the financial health of clinical trial operations from invoicing to payments. The system improves efficiency and compliance with facilitated visit management, reporting, audit trails, document management, and now, patient reimbursement cards.

To learn more about the Allegro family of cloud-based, easy-to-use clinical trial management systems, visit www.ForteResearch.com/allegro. Forte Research Systems, Inc.

Madison, Wisconsin USA (608) 826-6002 [email protected] http://www.ForteResearch.com/allegro Innovating through Collaboration®

PEER REVIEWED | PERFORMANCE METRICS IN CLINICAL TRIALS
Paul Hake, BEng, ACA

Clinical Metrics 102: Best Practices for the Visualization of Clinical Performance Metrics

This article addresses the basic visualization of clinical performance data and provides examples of key attributes in a metrics system.

HOME STUDY ARTICLE
Learning Objective: After reading this article, participants should be able to understand the value of performance metrics and choose the best chart types for common analytical objectives.
Disclosures: Paul Hake, BEng, ACA, is an employee of IBM.

The goal of a clinical metrics system is to improve performance through data-driven decision making; there is no other purpose for a metrics system. Organizations may invest many millions of dollars in constructing elaborate data warehouses and business intelligence platforms. Until someone uses the data to make a decision that reduces costs, improves quality, or reduces time, however, managers are essentially wasting time and money just looking at pretty charts. Metrics are an effective way of communicating performance data and a good way to answer the fundamental question: “How are we doing?” Implicit in any metrics system is the comparison of actual performance to a benchmark or target. This article addresses the basic visualization of clinical performance data and provides examples of key attributes in a metrics system, so that nontechnical or nonstatistical managers can better interpret and manage performance.

Background and Perspective

Much has been written1-4 about the importance of first determining the message in any report design. Real-world performance management is more complex, as often we do not know the specific question until after it emerges from a performance issue. Thus, a key requirement is the need for flexibility and for the manager to define and select reports that meet his or her specific needs. Invariably, answering the question “How are we doing?” leads to a follow-on question of “Why is that?” This is where traditional metrics systems can break down, as it is often difficult to follow an analytical thought path from that initial metric report. This activity is commonly referred to as “drill-down” or “slice and dice.” It is critical that the manager can follow this chain of thought within the system, without having to call someone from information technology (IT) support or performing additional manual steps that become barriers to analysis. If this happens, the manager will quickly lose interest and the process breaks down. Data quality and reliability are often issues, but are outside the scope of this article. Also assumed is that the manager possesses the necessary knowledge and skills to actually orchestrate process improvements.


Dashboards for Clinical Performance Management To effectively measure and communicate performance through a metrics system, we should consider the various chart styles or visualizations that are available. Technology advancements have significantly improved the visual quality of dashboards—user interfaces that organize and present information in ways that are easy to read—but the content remains critical. We are too easily impressed by flashy graphics that fail to convey the most important information clearly. A good starting point should be a dashboard that summarizes the entire portfolio and highlights performance across the entire organization in one place. If it sounds too simple, that’s because it probably is: By attempting to include everything in one chart, we risk overwhelming the manager with too much information. If we are considering the three performance aspects of

time, cost, and quality, we need a way to summarize performance and aggregate individual metrics into overall scores. A good approach is to present a summary view with the option to change perspec-

Technology advancements have significantly improved the visual quality of dashboards—user interfaces that organize and present information in ways that are easy to read—but the content remains critical. tives to focus on perhaps one aspect of performance—time, cost, or quality. The key point is that each manager has different requirements, so flexibility is important. A manager should be able to select different charts to meet his/her specific needs. Nowhere is this more important than during the actual analysis phase (answering the “Why?” question). There the specific performance issue that is the subject of analysis is dependent on the unique

Figure 1  High-Level Clinical Performance Dashboard

x

16    Monitor August 2012

data, specific goals, and requirements of that manager. Figure 1 is an example of a summary clinical performance dashboard. This dashboard has blinded data, but is

based on a real example. It highlights some of the key features and qualities inherent in good dashboard design. The very top row has dropdowns (or prompts) to select different filters. The manager can filter by time period, study phase, and therapeutic area. In the example, we are looking at Quarter 1, Phase I oncology studies. Establishing a time frame of reference is critical, and is a common omission in poorly designed dashboards. It should

Figure 1a  Patient Recruitment

Figure 2­  Comparison of Gauge Chart and Column Chart for the Same Data

Average: 382 Target: 371.5

# Studies 23 Std Dev 59.2

be immediately obvious which time frame we have selected, and that indicator should be included in practically every chart we use. The prompts or filters are a way of scaling a dashboard to many users and providing focus. It makes no sense for IT to design a separate report for each therapy area with the filters “hard coded” into the report. It is more efficient to design one report with a userfilter so that we can use the same physical dashboard for all therapy areas. The top section of the dashboard in Figure 1 contains a series of metrics or key performance indicators (KPIs), represented as green balls or yellow diamonds, which summarize the performance across a range of variables. The data are aggregated based on the prompt selections, and the color and shape of the graphics indicate current performance to target or benchmark, the numerical value, and the trend. Note that the trend is often more important than the absolute value. A substantial amount of data is represented in this dashboard. It includes the number of studies and the performance targets, and is focused on time metrics and enrollment performance. Notice in Figure 1a that the “Patient Recruitment” indicator is green, but the smaller “Trend” indicator is red. Also indicated are the average across studies, the standard deviation, and the target. Below the main metrics “bar” are a smaller set of indicators that communicate the timeliness of certain key events, such as the “first patient first

Figure 3  Pie Charts

visit.” This represents a fairly broad example of KPIs for the time aspects of clinical performance. The dashboard section to the bottom left in Figure 1 is a conventional column chart that communicates supply and demand for headcount or fulltime-equivalent resources. The main critique of this dashboard concerns the gauge charts, also shown in Figure 2. This “screen real estate” would be better occupied by a financial performance chart or some quality metrics, such as error rates or number of protocol amendments. Gauge charts can be difficult to interpret, but are often included in a dashboard as they look flashy. Compare the gauge chart to a simple column chart, as shown in Figure 2. Which chart in Figure 2 is better? Which chart makes it easier to understand the performance trend from last month for these KPIs? In this case, the column chart is easier to read and it better conveys the message intended from comparing the KPIs of this month to last month.

Pie Charts Although also ubiquitous, pie charts are generally not the best way to communicate data. This is illustrated by the following scenario, in which we want to compare the size of our studies based on number of patients. A standard two-dimensional (2-D) and a three-dimensional (3-D) pie chart are compared in Figure 3. Notice how the 3-D pie chart is slightly more difficult to interpret than the 2-D version? It is harder to compare the slices in 3-D, which really struggles to add anything to most charts. Now consider how these data would look as a bar chart, as shown in Figure 4. Figure 4  Bar Chart

x

Peer Reviewed    17

The bar chart communicates both the relative size of each study and the factual number of patients better than the pie chart. Bar charts are typically better than pie charts for illustrating relative size or proportion.

Bar and Column Charts

Bar charts are a standard component of most dashboards, and they are very effective. A good way to display data with lots of subgroups is a grouped or repeated bar chart, like the one in Figure 5. The mixing of lines and bars on the same chart should generally be avoided. Lines between unrelated categories are entirely misleading, as they imply a trend where none exists.

Figure 5  Repeated/Grouped Bar Chart
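The pie-versus-bar contrast of Figures 3 and 4 is easy to reproduce. A minimal matplotlib sketch follows, with invented study sizes; the article’s own figures come from a dashboard tool, so this only approximates them:

```python
import matplotlib.pyplot as plt

# Invented study sizes (number of patients per study)
studies  = ["Study A", "Study B", "Study C", "Study D", "Study E"]
patients = [412, 388, 295, 180, 97]

fig, (ax_pie, ax_bar) = plt.subplots(1, 2, figsize=(10, 4))

# Pie chart: similar slices are hard to rank by eye
ax_pie.pie(patients, labels=studies)
ax_pie.set_title("Pie: slices are hard to compare")

# Horizontal bar chart: both rank order and actual values are legible
ax_bar.barh(studies, patients)
ax_bar.invert_yaxis()  # largest study at the top
ax_bar.set_xlabel("Number of patients")
ax_bar.set_title("Bar: size and value are clear")

fig.tight_layout()
plt.show()
```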

Line Charts

Line charts are a simple but effective technique for showing trends over time. They can be stacked together to represent multiple data series. A classic line chart for clinical trial performance management is the cumulative enrollment chart (often referred to as an s-curve chart—see Figure 6). Line charts can also be supplemented with averages and trend or forecast margins, not shown in this example, but highlighted in Figure 8, the enrollment runway scatterplot. The line chart is another example in which adding 3-D (known as a ribbon chart) detracts from the ability to interpret and read data values from the chart.

Figure 6  Line Chart
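A minimal matplotlib sketch of such an s-curve, with invented planned and actual enrollment series:

```python
import matplotlib.pyplot as plt

# Invented monthly cumulative enrollment, planned vs. actual
months  = list(range(1, 13))
planned = [5, 12, 25, 45, 70, 100, 135, 165, 185, 195, 199, 200]
actual  = [3, 9, 20, 38, 60, 88, 118, 150, 175, 190]  # trial still ongoing

fig, ax = plt.subplots()
ax.plot(months, planned, marker="o", label="Planned")
ax.plot(months[:len(actual)], actual, marker="s", label="Actual")
ax.set_xlabel("Month")
ax.set_ylabel("Subjects enrolled (cumulative)")
ax.set_title("Cumulative enrollment (s-curve)")
ax.legend()
plt.show()
```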

Scatterplots and Predictive Analytics
Scatterplots are used to show correlation or relationships between two variables plotted on the X and Y axes. We will avoid going into details about statistics in this paper, but scatterplots are incredibly useful for understanding underlying relationships and can be used as predictors of future performance. Predictive analytics and regression analysis are also outside the topic of this paper, but are underused in most performance management and forecasting processes. Figure 7 shows an example of a scatterplot being used to visualize the relationship between a protocol quality score and the number of protocol amendments. A quality score is an aggregate of various lower level metrics and aims to summarize the overall quality of a protocol. Note that the chart in Figure 7 has prompts for Year + Month and Therapy Area. This example illustrates the entire portfolio as of September 2011. Each study is represented by a small circle, and the relationship between the number of amendments and the quality score is readily apparent: there is a strong downward trend from top left to bottom right. As the number of amendments increases, the quality score, ranging from 0 to a maximum of 100, decreases. Note also that the average score of the portfolio is 62. Hovering over the circles reveals the study name. The importance of flexibility should be apparent from looking at this chart. It clearly answers the question, "How are we doing?" but immediately begs further questions along the lines of "Why?"
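The following Python sketch shows the general idea: a scatterplot with a simple least-squares trend line fitted with numpy. The amendment counts and quality scores are invented, and the fitted slope is merely illustrative of how such a relationship can be quantified and used as a crude predictor.

# Minimal sketch: a scatterplot of protocol amendments vs. an aggregate
# quality score, with a least-squares trend line. Portfolio data invented.
import numpy as np
import matplotlib.pyplot as plt

amendments = np.array([0, 1, 1, 2, 3, 3, 4, 5, 6, 8])
quality    = np.array([92, 85, 80, 74, 70, 65, 58, 51, 44, 30])

slope, intercept = np.polyfit(amendments, quality, 1)   # degree-1 fit
xs = np.linspace(amendments.min(), amendments.max(), 50)

plt.scatter(amendments, quality, label="One dot per study")
plt.plot(xs, slope * xs + intercept, linestyle=":",
         label=f"Trend: {slope:.1f} points per amendment")
plt.xlabel("Number of protocol amendments")
plt.ylabel("Protocol quality score (0-100)")
plt.legend()
plt.show()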

Figure 7  Scatterplot—Quality vs. Protocol Amendments (each dot represents one study; the dotted line shows the clear relationship between amendments and quality)

As subject enrollment tends to be one of the key performance drivers for a clinical trial, it warrants extra attention in terms of monitoring and analysis. The chart in Figure 8 is an effective way to visualize enrollment performance for a group of studies; it is basically a scatterplot like the ones above, but with a few enhancements. This particular enrollment runway chart example effectively summarizes the enrollment performance for a portfolio of neuroscience studies. The measure of interest is the cumulative recruitment percentage relative to target, as shown on the vertical (or "Y") axis of the scatterplot. The horizontal axis measures the elapsed time as a percentage of planned time. If a study had fully enrolled on time, it would be represented on the plot as an "X" at the 100%/100% intersection, at the top of the yellow "runway." The yellow band represents 0 to –20% underenrolled and is our warning zone; green is enrolling ahead of plan, and red is significantly behind plan.

Figure 8  Enrollment "Runway" Chart
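A hedged sketch of such a runway chart in Python/matplotlib is shown below. The band boundaries follow the description above (a yellow warning zone from 0 to -20% under plan); the study data points are invented.

# Minimal sketch: an enrollment "runway" scatterplot. Each study is
# plotted as elapsed time (% of plan) vs. cumulative enrollment (% of
# target); shaded bands mark ahead-of-plan, warning, and behind-plan
# zones. All study data are invented.
import matplotlib.pyplot as plt

pct_time     = [20, 35, 50, 60, 75, 90, 95]   # elapsed time, % of planned
pct_enrolled = [25, 30, 48, 45, 60, 88, 70]   # enrollment, % of target

fig, ax = plt.subplots()
xs = [0, 100]
ax.fill_between(xs, xs, [110, 110], color="green", alpha=0.15, label="Ahead of plan")
ax.fill_between(xs, [x * 0.8 for x in xs], xs, color="yellow", alpha=0.25,
                label="Warning: 0 to -20% under plan")
ax.fill_between(xs, [0, 0], [x * 0.8 for x in xs], color="red", alpha=0.15,
                label="Significantly behind plan")

ax.scatter(pct_time, pct_enrolled, color="black", zorder=3)
ax.plot(100, 100, marker="x", color="black", markersize=10)  # on-time full enrollment
ax.set_xlabel("Elapsed time (% of planned)")
ax.set_ylabel("Cumulative enrollment (% of target)")
ax.set_xlim(0, 105)
ax.set_ylim(0, 110)
ax.legend(loc="upper left")
plt.show()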


Figure 9  Anatomy of a Box and Whiskers Chart (the vertical axis shows the measurement in its units, here 0 to 60; callouts identify the outliers; the maximum value, excluding outliers; the range that includes all data except outliers; the interquartile range containing the middle 50% of values; the 75th percentile; the median value; the 25th percentile; and the minimum value, excluding outliers)

Box and Whiskers
The box and whiskers (B&W) chart is often underused, as it is considered too statistically technical and complex (see Figure 9). This is unfortunate, since the B&W is excellent for summarizing and highlighting the spread or range of data. Understanding the range of data about a median can help managers make better decisions by drawing their attention to outliers and patterns that deviate from expectations. The B&W chart helps us visualize the spread or range of our data. The "box" represents the middle 50% of our values; the "whiskers" extend to cover most of the data points above or below this middle 50%; and outliers are shown as separate points or dots on the chart. B&W charts measure only one thing at a time (e.g., enrollment, number of errors), but can be stacked side by side to compare different categories (e.g., by study number or by country/region). Figure 10 is an example comparing quality scores across contract research organizations (CROs) and sponsors. We are obviously assuming the availability of blinded and shared performance data to produce this style of analysis. The individual blue boxes represent ranges of quality scores for different sponsors (left-side chart) and CROs (right-side chart). The blue "box" represents the range of scores for the middle 25–75% of the data. The black line in the middle of the box represents the median score; this is the value if we arranged all scores in order from lowest to highest and selected the one in the middle. Thus, 50% of the scores are inside the range represented by the blue box for each sponsor. The "whiskers" represent the top and bottom ranges of data that are not considered outliers. Outliers, such as the example at the top of the left chart of Figure 10, are represented as individual points outside the B&W range. Outliers are considered unusual scores that one would be wise to investigate. The B&W chart provides a powerful way to understand quality by comparing the sizes of the various boxes and whiskers, with a small box implying higher consistency. Thus, a small box at the top of the quality score axis implies good and consistent performance, and a longer box with longer whiskers could indicate quality or management issues.

Figure 10  Box and Whiskers Showing Quality by Sponsor and CRO
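Most charting tools will compute the box, whiskers, and outliers automatically. As a minimal illustration, the Python sketch below uses matplotlib's boxplot with invented, blinded quality scores; note how the deliberately planted extreme score surfaces as an outlier point.

# Minimal sketch: box and whiskers charts of quality scores grouped by
# sponsor, as in Figure 10. matplotlib draws the box (interquartile
# range), median line, whiskers, and outliers automatically.
import matplotlib.pyplot as plt

scores_by_sponsor = {
    "Sponsor A": [78, 81, 83, 84, 86, 88, 90, 91],   # small box: consistent
    "Sponsor B": [55, 62, 70, 74, 79, 83, 90, 97],   # long box: variable
    "Sponsor C": [68, 70, 71, 72, 73, 74, 75, 98],   # 98 appears as an outlier
}

fig, ax = plt.subplots()
ax.boxplot(list(scores_by_sponsor.values()),
           labels=list(scores_by_sponsor.keys()))
ax.set_ylabel("Quality score (0-100)")
ax.set_title("Quality by sponsor (blinded)")
plt.show()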

Conclusion
Managers are drowning in data, but are seldom provided with critical performance insights in the right format to make decisions. Managing clinical trials is complex, and better visualizations would free up valuable time spent manually constructing reports and charts in spreadsheet programs. Clinical trials could run more efficiently if managers had reliable and accurate performance data presented in a format that highlighted important trends, variances, and exceptions. Each style of visualization has strengths and weaknesses; some are overused and some are underused. Chart styles are often selected not for their suitability to communicate the data, but because they have flashy 3-D graphics that make the dashboard tool look good. Statistically oriented visualizations, such as the B&W chart, offer an effective mechanism to quickly understand data in context and focus on its important characteristics. Comparison is the key—comparing performance by study, vendor, region, phase, and endpoints. We have not examined all the potentially useful charts and analyses that are available, but have aimed to cover the most significant categories and point out what works and what should be avoided. We hope this has helped you generate ideas on your way to more effective performance management.

Table 1  Summary of Chart Styles and Use

Chart Type | Appropriate Use
Pie Chart (2-D) | Show proportional relationship of a few components of a whole (100%)
Bar and Column Charts | Show ranking or comparison of items
Line Charts (2-D) | Show trend over time
Runway Charts | Show cumulative data (e.g., patient recruitment) over time in relation to a target range
Scatterplots | Show correlation/relationship of two variables
Box and Whiskers Charts | Show the spread or range of data compared to the median value and highlight extreme data outliers

Table 1 presents a summary of the chart types highlighted in this article and their suggested uses. The key learning points are as follows: ●● Flexibility

is important. Our starting point is answering the question, “How are we doing?” but answering the follow-on questions of “Why?” and “What should we be doing?” adds more value, especially if the answers can appear from analysis of data without a lot of manual effort. ●● Don’t be fooled by flashy graphics that can’t be easily interpreted (e.g., 3-D charts). ●● Learn to use scatterplots and B&W charts; they are effective at summarizing data and revealing hidden characteristics. Both are essential for analytics.

●● Monitor

the trends of metrics and performance indicators, as well as the current value. ●● Don’t have too many KPIs. Look at using aggregated quality measures to summarize the detail.

References
1. Few S. 2004. Show Me the Numbers: Designing Tables and Graphs to Enlighten. Analytics Press.
2. Tufte ER. 2001. The Visual Display of Quantitative Information, 2nd ed. Graphics Press.
3. Zelazny G. 2001. Say It with Charts. McGraw-Hill.
4. Laursen G, Thorlund J. 2010. Business Analytics for Managers. Wiley.

Paul Hake, BEng, ACA, is the executive for Global Healthcare and Life Sciences at IBM Business Analytics, where he is responsible for business analytics software solutions. He has 12 years of experience implementing and managing performance management systems in a research and development environment. He is currently enrolled in the Master of Science in Predictive Analytics program at Northwestern University, and can be contacted at [email protected].


Discover the pathway to career success. NOW ONLINE! Clinical Trials Management and Regulatory Compliance Certificate Program at the University of Chicago

Learn online from experts in the field in this rigorous and comprehensive program at the University of Chicago. Experience the accessibility and flexibility of an online program, or the convenience of our 3-day seminars in downtown Chicago. Gain the qualifications to take the SoCRA Certification Exam. In this program you will master the:
● Current principles and practices in medical research and research study design
● Laws, regulations, protocols, and ethical standards governing clinical trials and testing on human subjects
● Mechanics of planning and managing a study site
● Statistical concepts in study design and result evaluation
● Ability to achieve clarity and precision in reporting study outcomes

CONVENIENTLY SCHEDULED FOR WORKING PROFESSIONALS. AFFORDABLE.
Courses begin soon. Enroll today.
grahamschool.uchicago.edu/go/CTACRP
[email protected]
773.702.5537

David S. Zuckerman, MS

Performance Metrics in Clinical Trials | Peer Reviewed

What Gets Measured Gets Fixed
Using Metrics to Make Continuous Progress in R&D Efficiency

Home Study Article
Learning Objective: After reading this article, participants should be able to identify the various types of metrics required to manage and improve performance in their organization and perhaps develop some of their own metrics.

Disclosures: David S. Zuckerman, MS, receives royalties from sales of his book on pharmaceutical metrics.

I've worked with many pharmaceutical, biotech, and device companies over the years, and I've come to notice a pattern in drug and device development: Success comes despite huge obstacles, more of which seem to be organizational and process-driven than technical. See if any of these sound familiar:
● Huge workloads, often resulting in lots of turnover
● Attempts by groups early in the process to rush or simplify their work, resulting in lots of extra work for downstream groups
● Frustration by downstream "customer" groups with the output of their upstream "supplier" groups (e.g., the clinical group, as the customer for the medical group's protocols, gets frustrated with their supplier's overly complex protocols)
● Equivalent frustration in the clinical group concerning the lack of input from its customer groups downstream (e.g., marketing)
● Disappointing financial results due to cost overruns or late product to market
● Constant communication lapses and confusion about what to expect and who is doing what
● Eventual surprise and disappointment when things turn out poorly, even though much of it could have been predicted (e.g., low enrollment)

It doesn’t have to be this way. Other industries (e.g., electronics) and hugely successful companies (e.g., Apple) run incredibly efficient research and development (R&D) organizations that churn out fantastic new products on incredibly short timescales. Certainly, they don’t have the vagaries of biology to deal with, but it’s clear that we in biopharma-device R&D could be much more effective. That’s what metrics are all about. If we could really measure what we do, how we do it, and how we interact with others to get it done, then we could increase our effectiveness in getting the right work done with speed, quality, and efficiency. It’s really quite simple: what gets measured gets fixed OR if we can’t measure it, we’ll never be able to fix it. Here’s an example: If you tell one of your department heads that he must reduce costs—and that you will link his bonus to that measure and that


Here's an example: If you tell one of your department heads that he must reduce costs—and that you will link his bonus to that measure and that measure only—he will immediately attempt to comply, in all likelihood by reducing staff, which endangers both the quality and the speed of his department's work. On the other hand, if you tell him that cost is not an issue, but he must increase quality, he will immediately go out and hire staff to provide more careful execution, quality checks and reviews, and slow things down to ensure time for double and triple checks. If you tell him he must do all three—reduce cost, increase quality, and reduce time—he will redesign processes, automate, develop new tools, and do whatever else he and his team can think of to accomplish all three goals simultaneously. So measure cost, and cost will get fixed at the expense of time and quality. Measure quality, and quality will get fixed at the expense of cost and time. Measure all three, and the organization will work on all three. In short, if you measure it—and motivate people to improve the measures—they'll respond. They may not like it, and it may take a long time to accomplish, but they will respond. Furthermore, if you tell your department head to do one thing, but measure and reward something else, she'll focus on what's being measured rather than what's being touted. If you tell her that you really, passionately want her to increase quality, but measure and reward her for cutting costs, then costs will be cut. Quality will go up only if it can be conveniently done at the same time with no extra effort. In all likelihood, quality will actually decrease (and time will stretch out, because it's not being measured or even discussed). In short, measurements can reinforce declared strategies or work against them, but what gets measured is what will get fixed.

Metrics provide both the reinforcement and the feedback mechanism for your strategies and goals, as shown in Figure 1. If you align your metrics with your goals and strategies, not only will you provide reinforcement to your departments and teams, you will receive real-time feedback about how you are doing in implementing those strategies and be able to manage the performance of your organization, departments, teams, and employees. So it's critical to align your metrics and incentives with your rhetoric and goals. Do that, and you'll absolutely make progress on your goals. It's really quite simple, but requires continuous, consistent attention by management.

Figure 1  Metrics Provide Both Reinforcement and Feedback (a cycle: Vision/Mission → Strategies → Plans to Achieve Strategies → Deploy → Implement → Measure and Adjust) © David S. Zuckerman 2006. Reprinted with permission.

What to Measure
Figure 2 shows the categories of metrics that you need to use to measure and track progress in your organization. I recommend that you start with

a Strategy Map to help you identify the best set of metrics for your organization.1,2 Figure 3 shows an example of a Strategy Map for a small oncology biopharmaceutical firm.

Figure 2  Good Metrics Systems Maintain a Balance in Multiple Dimensions (timeliness, cycle time, quality, and efficiency; financial, customer satisfaction, performance, and organizational growth) © David S. Zuckerman 2006. Reprinted with permission.

Figure 3  Strategy Maps Are the Basis for Metrics Selection in Each Dimension (goal: bring new cancer drug to market as quickly as possible; financial perspective: maintain financial backing and solvency; customer satisfaction perspective: build interest in the physician community, build interest by big pharma; performance perspective: publish and attend meetings, efficiently conduct trials and achieve marketing approval, maintain efficient clinical operations, active project tracking and problem prevention, accurate project planning, build partnerships with CROs; organizational growth perspective: management focus on project quality and team performance, employee focus on problem prevention, state-of-the-art planning and tracking systems, state-of-the-art CRO management practices) © David S. Zuckerman 2006. Reprinted with permission.

Performance Metrics
We're all familiar with cycle time metrics, which concern the time to accomplish a task such as protocol development, site initiation, or enrollment. Most of what we measure tends to fall into this category. However, it's important to look at three other performance measurement categories: timeliness, quality, and efficiency. Sometimes it's important to hit a particular milestone (e.g., on-time study completion). Timeliness measures tend to be important at the beginning and the end of a project, whereas cycle time measures tend to be more important in the middle. Hence, it's best to start and end on time, and do everything in between as quickly as possible. However, cycle time and timeliness are only part of the picture. We need to accomplish our tasks with a minimum of errors and rework (e.g., queries, amendments, and low-enrolling sites) while minimizing our resource expenditures (staff hours and money). Hence, we need quality and efficiency measures as well. I recommend a balanced set of timeliness, cycle time, quality, and efficiency measures; perhaps three of each, resulting in a dozen performance metrics total. These metrics should be distributed over the life of a project from start to finish, but should be concentrated more toward the front end of the project. The front-end metrics provide the opportunity to identify problems early and make corrections before things get out of hand and outrageously expensive (see Figure 4).

Based on the Metrics Champion Consortium's clinical trial performance metrics,3 a typical pharmaceutical R&D organization might use the set of performance metrics shown in Table 1. In the case of the oncology biopharmaceutical firm example (Figure 3), we might substitute one or two measures related to scientific meetings (e.g., percentage of oncology meetings where we make presentations), but most of the measures in Table 1 are applicable. In addition to performance metrics, we need to include the other aspects of what can be thought of as a "balanced scorecard": financial, customer satisfaction, and organizational growth.4
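As a small illustration of how such a balanced set might be computed, the Python sketch below derives one metric from each category over a list of study records. The field names, the formulas (e.g., treating budget accuracy as one minus the mean absolute variance from budget), and the figures are all hypothetical.

# Minimal sketch: one metric from each balance category (timeliness,
# cycle time, quality, efficiency) over hypothetical study records.
from statistics import median

studies = [
    # site_act_on_time: fraction of sites activated by their planned date
    {"site_act_on_time": 0.92, "protocol_to_crf_days": 41, "protocol_quality": 78, "budget": 4.0, "actual": 4.6},
    {"site_act_on_time": 0.81, "protocol_to_crf_days": 55, "protocol_quality": 66, "budget": 2.5, "actual": 2.4},
    {"site_act_on_time": 0.97, "protocol_to_crf_days": 38, "protocol_quality": 84, "budget": 6.0, "actual": 6.3},
]

timeliness = sum(s["site_act_on_time"] for s in studies) / len(studies)
cycle_time = median(s["protocol_to_crf_days"] for s in studies)
quality    = sum(s["protocol_quality"] for s in studies) / len(studies)
# Budget accuracy: 1 minus the mean absolute overrun/underrun
efficiency = 1 - sum(abs(s["actual"] - s["budget"]) / s["budget"] for s in studies) / len(studies)

print(f"Sites activated on time: {timeliness:.0%}")
print(f"Median protocol-to-CRF cycle time: {cycle_time} days")
print(f"Mean protocol quality score: {quality:.0f}/100")
print(f"Budget accuracy: {efficiency:.0%}")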


Figure 4  It’s Much Less Expensive to Identify and Fix Problems Early in a Project High

Cost to Fix a Problem

Low Start of Project

Early Indications

First Manifestation

Full-Blown Problem

© David S. Zuckerman 2006. Reprinted with permission.

Table 1  A Typical Biopharmaceutical R&D Performance Metrics Set

Timeliness: on-time contract research organization (CRO) contract execution; percentage of sites activated on time; on-time invoice payments
Cycle Time: approved protocol to approved case report form (CRF); approved protocol to first site activated; site activation to first subject first visit
Quality: protocol quality score; site selection quality score; site performance quality score
Efficiency: CRO budget and pricing accuracy; drug supply cost accuracy; project earned value

Financial Metrics
It's fine to turn out great products, but if every product costs more money to develop than it yields in profit, even the most robust company will eventually go bankrupt. Furthermore, lest you think that this is all about money, even the most altruistic, not-for-profit organization must at least break even financially if it is to survive; so, successful financial results are imperative regardless of the product being produced or the motivation of the organization. There are two categories of financial metrics: sales-related and cost-related. In general, a company wants to increase sales and decrease costs. However, in an R&D organization there are no sales, so the financial metrics must focus on making the organization more cost efficient. Some useful financial metrics for clinical operations groups include:
● Cost per clean data point
● Cost per subject (normalized to therapeutic area and protocol complexity)
● Budget accuracy
Your Strategy Map will guide you to the best set of financial metrics for your situation.

Customer Satisfaction Metrics
Whether your customer is the marketing and sales arm of your company or the physicians and patients who use your product, it is critical to make sure that the folks who use the output of R&D will be happy with what they're getting. This may seem silly to some in R&D; after all, isn't R&D the engine that drives the company? So shouldn't R&D be the main decision maker in what gets developed? At first glance, these seem reasonable questions, and traditionally R&D has indeed been the main decision maker. However, there's no point in creating something that patients and physicians don't want or that payers won't include in their formularies. Meanwhile, your supply chain (CROs, labs, vendors, sites) has to be satisfied as well. Your sites will perform better if they actually want to work with your company, and your CROs and vendors will be more proactive if their staffs really enjoy working with you. In this sense, sites, CROs, and vendors are your customers, as well. If they don't enjoy working with you, they won't bother to tell you about problems that exist in your CRF or bring up that great new idea that could save you months of time and effort. Customer satisfaction metrics are primarily survey-based, and there are many high-quality survey tools available.5 Some powerful customer satisfaction metrics include:
● Number of "key opinion leaders" who favor your company and product
● Your marketing group's view of R&D as a partner
● Site satisfaction after working with your company or your CRO
● Collaboration excellence in your CRO relationships

Organizational Growth Metrics
The category of organizational growth metrics underpins the other three, because you can't produce good performance, customer satisfaction, and financial results without a strong organization. Included are organizational competencies and skill levels, employee satisfaction, employee career growth, leadership, technologies, information management, and culture. A strong set of these capabilities at all levels of the organization is critical if your organization is to achieve real quantum leaps in performance. Some useful metrics are:
● Percent of key systems that are state-of-the-art
● Quality and effectiveness of project teams
● Levels of respect, trust, and communication in the organization
● Degree of engagement of senior management
● Staff retention, quality, and hiring effectiveness

Just as we had to create balance among our internal performance metrics categories (timeliness, cycle time, quality, and efficiency), we need to create balance among the four organizational improvement categories (performance, financial, customer satisfaction, and organizational growth). Hence the term "balanced scorecard." This multidimensional balance is shown in Figure 2.

How to Create the Right Measures
No two organizations will end up with exactly the same set of measures. Some organizations will focus on creating first-in-class, best-in-class therapies; others will focus on lower cost, follow-on therapies. Some will go for a few blockbuster products, and others will go for many smaller market therapies or orphan drugs. Some will focus on specific therapeutic areas, while others will want to be more nimble, responding to "targets of opportunity."

Since "what gets measured gets fixed," it is important to make sure that your metrics reflect the vision, goals, and strategies of your organization. As stated earlier, Strategy Maps create this linkage and make it much easier to select the best set of metrics. Although a detailed discussion of Strategy Maps is beyond the scope of this article, the process can be summed up in four sequential questions:2
1. To achieve our vision and goal, what financial strategies must we execute?
2. To achieve our vision, goals, and financial strategies, what customer and supplier satisfaction strategies must we execute?
3. To achieve our vision, goals, financial, and satisfaction strategies, how must we improve our internal operations and performance?
4. To achieve our vision, goals, financial, satisfaction, and performance strategies, how must our organization, culture, competencies, technology, and workforce grow and improve?
These four questions should be answered in exactly this order. Trying to address satisfaction or organization before addressing financials yields unsatisfactory results. I recommend that you first build your vision and goals (goals being the quantifiable aspect of your vision), then develop your Strategy Map. Once you have done this, you will find that the best metrics for your organization are fairly easy to define. Once you have built your Strategy Map and metrics at the top level (e.g., the R&D or clinical operations level), you can cascade your metrics all the way down to individuals within the organization (see Figure 5). You can either cascade the Strategy Map or cascade the metrics themselves:
● If you cascade the Strategy Map, each group within the organization takes one of the strategies on the top-level map and uses it as its goal. It then creates a new, lower-level map based on that goal. Each group then creates its own metrics based on its map.
● If you cascade the metrics, then each group within the organization takes one or two of the metrics as its focus and creates lower level metrics to support it.

Figure 5  Cascading the Strategy Map or the Metrics Allows Every Individual and Group to Align With the Top-Level Goals and Metrics (two parallel cascades run from R&D through the division, department, and individual levels: one cascades Strategy Maps, with metrics derived at each level; the other cascades the metrics themselves) © David S. Zuckerman 2006. Reprinted with permission.

I have found that it's easier to cascade the maps rather than the metrics, but either way allows individuals doing project work to relate their performance to the overall organizational goals and helps prevent different groups from creating conflicting goals and strategies.
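One hedged way to picture a metrics cascade is as a simple tree in which each group's score rolls up to the level above. The Python sketch below is our own toy illustration (equal weighting, invented scores), not a prescribed implementation; a real deployment would tie each node to a strategy-map goal and weight the roll-up appropriately.

# Minimal sketch: cascading metrics as a tree, so each group's measures
# roll up to the level above. Structure and scores are hypothetical.

class MetricNode:
    def __init__(self, name, score=None, children=None):
        self.name = name            # group or individual
        self.score = score          # leaf performance, 0-100
        self.children = children or []

    def rollup(self):
        """A node's score is the mean of its children's rolled-up scores."""
        if not self.children:
            return self.score
        return sum(c.rollup() for c in self.children) / len(self.children)

rnd = MetricNode("R&D", children=[
    MetricNode("Clinical Operations", children=[
        MetricNode("Monitoring group", score=82),
        MetricNode("Site management group", score=74),
    ]),
    MetricNode("Data Management", children=[
        MetricNode("CRF design team", score=90),
    ]),
])

print(f"R&D rolled-up score: {rnd.rollup():.1f}")   # (78 + 90) / 2 -> 84.0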

Conclusion
Creating and implementing a balanced metric system will allow you to overcome the common biopharma-device R&D problems of excessive workloads, poor quality, rework, confusing/changing requirements, and disappointing outcomes and financial results. However, to be successful, your metric system must be carefully thought out:
● Using Strategy Maps will allow you to create metrics that tie back to your goals.
● Creating a balance between financial, customer, performance, and organizational growth metrics, as well as timeliness, cycle time, quality, and efficiency metrics, will ensure that all aspects of your organization are being measured and improved, so that what you want to fix actually gets fixed.
● Cascading your metric system from the top of your organization to the bottom will allow every team, group, and individual to focus on the same goals without the risk of conflicting strategies and tactics.
By using this methodical approach to creating your metric system, both your organization and its performance will rapidly improve.

References
1. Zuckerman DS. 2006. Pharmaceutical Metrics. Gower Press, chapter 3.
2. Kaplan RS, Norton DP. 2000. Having trouble with your strategy? Then map it. Harvard Business Review, September-October 2000.
3. Metrics Champion Consortium. Set of 51 clinical trial performance metrics, available at www.metricschampion.com.
4. Kaplan RS, Norton DP. 1992. The balanced scorecard—measures that drive performance. Harvard Business Review, January-February 1992.
5. For example, see Spiller PT, Zuckerman DS, Bunch DS. 2011. Innovative measurement and improvement techniques for strategic partnerships. DIA 47th Annual Meeting, Session 419, June 23, 2011.

David S. Zuckerman, MS, is president of Customized Improvement Strategies LLC and the author of Pharmaceutical Metrics from Gower Press. He focuses his work and training courses on reducing risk in clinical development through such strategies as creating metrics and balanced scorecard systems; building state-of-the-art outsourcing alliances and functional outsourcing capabilities; eliminating protocol and outsourcing risk and optimizing performance; eliminating site selection and enrollment problems through state-of-the-art technologies and processes; and implementing change management and process improvement throughout organizations and partnerships. He holds engineering degrees from Princeton University and Washington University in St. Louis, Mo., and can be contacted at [email protected].

Four Engaging Courses in an All-New Format: Less Lecturing, More Interacting
ACRP Professional Development's new eLearning courses will allow you to master the latest and most relevant content in a convenient, interactive, online format:
● Introduction to ICH GCP
● Advanced ICH GCP
● Ethical Considerations for Clinical Researchers
● ACRP Certification Exam Preparation Course
See all ACRP Professional Development offerings at www.acrpnet.org/pd


Liz Wool, RN, BSN, CCRA, CMT

Performance Metrics in Clinical Trials | Peer Reviewed

Intertwining Quality Management Systems with Metrics to Improve Trial Quality

Managing quality in clinical trials is the focus of daily activities that, per the Clinical Trials Transformation Initiative (CTTI), substantiate the clinical research community's ability "to effectively and efficiently answer the intended question about the benefits and risks of a medical product (therapeutic or diagnostic) or procedure while assuring protection of human subjects."1 At the Drug Information Association's 2011 Annual Meeting, a presenter from the European Medicines Agency (EMA) described "quality" as "sufficient to support the decision-making process on medicines throughout the clinical development and postmarketing authorization."2 With the advent of increasing protocol complexities, technological capabilities, and globalization of research, a prospective, systematic, and methodical approach to ensuring quality in clinical trials is needed and is being advocated by regulators. Additionally, both the Food and Drug Administration (FDA) and the EMA are collaborating with all stakeholders in clinical research to define and describe the elements of quality by design (QbD) in the context of clinical trial conduct. QbD incorporates the elements of a quality management system with benchmarks to the International Conference on Harmonization's ICH Q7 through Q10 documents and the International Organization for Standardization's ISO 9000 standards for quality management systems. In May 2012, the EMA hosted a workshop that focused on a reflection paper by the agency's Good Clinical Practice (GCP) International Working Group about risk-based quality management in clinical trials. The group requested input from various stakeholders (including ACRP) on what the QbD elements, context, and framework are for clinical trials. Similarly, the FDA and CTTI are launching the QbD workstream in 2012. For clinical researchers in the 21st century, stating that we have standard operating procedures (SOPs) and training does not define a quality management system. This article describes a targeted review of a quality management system that, when adhered to in its entirety, provides the organization with steps for defining, planning, monitoring, measuring, and continuously improving the quality of its work, leading to the inherent ability to identify, analyze, and address possible performance issues through the appropriate use of metrics.

Quality Management System
Rather than focus solely on GCP compliance, SOPs, processes, forms, and training, there needs to be a renewed focus on "building quality within an organization" that inherently possesses the culture of quality.3


A quality management system sets out the standards to be achieved and the method to meet them. The system should define what people, actions, and documents should be employed to conduct the work in a consistent manner, leaving evidence of what has happened. It may include manuals, handbooks, procedures, policies, records, and templates.4 The International Organization for Standardization (ISO) was founded in 1947, and in 1987 it published the first ISO 9000 standard for quality management systems.5 Many people believe that ISO 9000 focuses only on the manufacturing of products; however, ISO 9001:2008 was written such that small businesses (e.g., consulting firms) can implement ISO 9000 for their organizations. The ISO definition for quality states that "the quality of something can be determined by comparing a set of inherent characteristics with a set of requirements." With this in mind, this article will discuss the quality management system principles espoused in ISO 9000-9001, extrapolating their use to the global clinical research arena. Additional terms that organizations may use to describe their quality management system include clinical quality system, integrated quality management, quality management, or total quality management. From a practical standpoint, it is important to understand how the elements and components of an organization's quality management system relate to this article's description of a quality management system in order to perform a comprehensive gap analysis. Due to space limitations, alternative theories and methods will not be discussed here. Kleppinger and Ball, in their article "Building Quality into Clinical Trials with the Use of a Quality Systems Approach," describe the utility and application of ISO 9000-9001 quality management system principles for clinical trial planning, execution, ongoing monitoring, and continuous improvement during the clinical trial lifecycle.6 The authors assert that, even though a quality system does not impose something totally new on clinical research, a systematic approach will produce a more reliable and useful end product—that is, high-quality data obtained without compromising the protection of human subjects' rights and welfare. After implementing a quality management system, the organization is required to assess, monitor, and measure how well it is performing to the established standards and methods. Using visual inspection and confirmation, document review, data analytics and review, and metrics, the organization implements the foundational cornerstones for assessing its performance.

Elements of a Quality Management System
The overarching framework for a quality management system is illustrated in Figure 1, which visualizes the critical "plan, do, check, act" approach of a committed organization to quality through the development, implementation, and maintenance of a quality management system.

A quality management system provides the prospective, systematic, methodical, scientifically based framework to plan, manage, monitor, and measure the quality of the organization and its performance throughout the clinical trial and product development lifecycles. Specifically, the quality management system establishes the standards under which work will be performed and how the organization and personnel perform and document their assigned clinical trial activities, duties, and functions. An additional illustration is provided in Figure 2, which outlines the phases of product development for clinical research, benchmarking to similar illustrations for the ICH Q10 Pharmaceutical Quality System Guideline.7 This network system of interrelated processes provides uniformity and consistency for people and actions, describing the work performed and how the work is documented, as outlined in Figure 3. In the clinical trials context, this framework establishes a reliable network of commitments throughout the organization and business enterprise, which each employee of and contributor to the research site knows and understands, and to which everyone performs. Thereby, the organization or business establishes, maintains, and manages this network of commitments, which supports building quality into clinical trial practices.8

Figure 1  Framework for a Quality Management System (elements arrayed around the QMS framework include: quality policy, quality manual, procedural documents, and document control; management responsibility and management review of the QMS; risk management; performance dashboards; a metrics system; change control; issue escalation; training; process monitoring and analyses; process improvement; deviation management; a corrective and preventive action program; and a GCP quality assurance unit with an annual audit plan)

Figure 2  An Outline of Clinical Research Phases Within a Quality Management System (QMS) Structure. The GCP quality management system ("building quality into the clinical program") spans protocol design and operational design, study planning, study start-up, study conduct, study close-out, data lock and analysis/clinical study report (CSR), and marketing application, plus the investigational product-device. QMS elements include management responsibilities (quality culture, policy, objectives, resources, and a quality commitment by all staff); process performance (systems, processes, documentation); a quality-performance monitoring system (quality control/quality assurance); a corrective and preventive action (CAPA) system; a change management system/CQI with document control; and management review of the ongoing acceptability of the system. Knowledge management and risk management serve as enablers, with Good Clinical Practices as the foundation.

Inherent in a quality management system is documenting the work performed, evaluating deviations from the established quality standards and controls, and taking the necessary actions for immediate and continuous process improvement. This framework for quality is not so different from that used in hospitals and medical institutions, which must obtain and maintain accreditation of their facilities per country/state requirements. Therefore, a quality management systems approach in clinical research is a further extension of delivering quality care for those patients who volunteer to participate in clinical research. Table 1 presents a targeted description and application (examples) of critical aspects of both the quality management system framework and the plan, do, check, act principles.

Metrics: Reporting Performance of the Quality Management System
When defined and used appropriately, metrics are an effective tool for monitoring and measuring the performance of an organization's quality system and execution of the clinical trial to the established quality standards. Specifically, quality metrics provide the ability to measure progress toward the standards and goals defined by the organization. Quality metrics are reported as the "error rate," and require predefined tolerance limits for effective monitoring, measurement, and reporting of quality and compliance signals to the organization. Note that what most organizations refer to as "key quality indicators" are a subset of performance indicators. For example, a site monitoring report may be completed on time; however, is the report completed correctly per the standards outlined in the monitoring visit report completion guidelines, monitoring visit report template, monitoring plan, and monitoring SOP? Did the CRA/monitor capture critical issues, protocol deviations, or violations in the monitoring report as identified by either database review of deviation listings or during an onsite quality assessment visit performed by his or her supervisor or the sponsor? Additional examples are described in Table 2.

Figure 3  The Network of Interrelated Processes (a network of interrelated processes, with each process made up of people and work: activities, tasks, records, resources, documents, rules, forms, regulations, reports, materials, supplies, tools, and equipment)
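To make the "error rate with predefined tolerance limits" idea concrete, here is a minimal Python sketch that classifies a metric against a warning limit and an action limit. The limits, counts, and response wording are invented for illustration; an organization would define its own in its quality plan.

# Minimal sketch: reporting a quality metric as an error rate and
# checking it against predefined tolerance limits, as described above.

def error_rate(errors, total):
    return errors / total if total else 0.0

def signal(rate, warn_limit, action_limit):
    """Classify the metric against its tolerance limits."""
    if rate >= action_limit:
        return "OUT OF TOLERANCE: investigate root cause, open CAPA"
    if rate >= warn_limit:
        return "WARNING: evaluate companion metrics"
    return "within tolerance"

# Example: monitoring-report errors found in a sample of completed reports
reports_reviewed, reports_with_errors = 120, 9
rate = error_rate(reports_with_errors, reports_reviewed)
print(f"Error rate: {rate:.1%} -> {signal(rate, warn_limit=0.05, action_limit=0.10)}")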


Table 1  Descriptions and Examples of Quality Management System Elements

Quality Culture
● An organizational value system that results in an environment that is conducive to the establishment and continual improvement of quality
● Maintain an awareness of quality as a key cultural issue
● Make sure that there is plenty of evidence of management's leadership
● Empower employees and encourage self-development and self-initiative
● Recognize and reward the behaviors that tend to nurture and maintain quality culture
Examples: Incorporated into the mission statement, employee on-boarding training program, and employee handbook; executed at a division/department level in a manner that provides a direct, concrete, and succinct link between quality and the specialty areas of employees

Management Commitment
● Required to establish authority and commitment to the provision of resources to develop, maintain, and sustain the quality system
Examples: A quality policy document for the organization/business that is referred to and adhered to by the organization at all levels; a video on the organization/business intranet communicating this commitment to all employees; assignment of resources (people, funds, facilities, equipment) as required to develop, maintain, and continuously manage the quality system

Quality Policy
● Describes how an organization approaches quality and the requirements for meeting expectations
Example: Guy's and St. Thomas' National Health Service (NHS) Foundation Trust Quality Policy, which describes the quality system required to ensure that clinical research conducted in NHS clinical research facilities in the U.K. fulfills statutory requirements laid down in current and any future regulations, as well as research governance guidelines; the aim of the policy is to maintain a quality management process that not only meets the requirements of applicable regulations and guidelines, but also adds value to the reputation of the trust as a location where high-quality, robust research is conducted

Risk Management
● Supportive framework (enabler) for the quality management system
● A continuous, formal process involving the systematic application of documented management policies, procedures, and practices to the tasks of analyzing, evaluating, controlling, and communicating risk
Examples: Risk management principles may be applied to clinical trials by prospectively identifying those aspects that are critical to ensure the reliability of results (data quality, data integrity) and protection of study subjects; a Protocol Risk Management Plan for global clinical trial execution; a Study Recruitment Plan risk assessment and associated Risk Management Plan to ensure on-time subject enrollment

Knowledge Management
● Supportive framework (enabler) for the quality management system
● Prospective, controlled, and methodical approach for identifying/capturing and disseminating collective organizational expertise (past, present) for staff to perform their roles, duties, activities, and functions in clinical research to improve organizational performance
● The process usually involves several of the following stages or subprocesses in the use of knowledge: create, identify, collect, organize, share, adapt, and use9
Examples: Conduct lessons learned during the clinical trial, and review and identify information that is important to other teams to improve performance in their jobs and on their protocols; includes real-time analysis and review of systems, processes, SOPs, and methods, and implementation of changes in a timely manner with rapid communication to staff; ACRP's Online Community, which includes sharing of best practices, lessons learned, and SOPs, where the end-user reviews the information and utilizes it as stated or modifies it for use (e.g., site SOPs and forms posted)

Communication
● A quality system's requirements, standards, and effectiveness need to be communicated to everyone involved in a particular activity in order for the system to work
Example: Policies, procedures, study requirements, and responsibilities should be communicated prospectively to affected staff, contract research organization (CRO) and service provider personnel, and clinical investigators; adequate training should be provided to all study staff, per their job functions (sites, sponsors, CROs, vendors, contractors, consultants)

Job Responsibilities and Assessment of Personnel Competencies
● Job descriptions are present and current, and there is a methodology for continuously reviewing and updating them according to changes in the regulatory landscape and other job responsibilities
● Do personnel possess the competencies, knowledge, experience, skills, and training to execute their assigned responsibilities, duties, functions, and activities?
Example: Each position has a current and documented job description, employee/contractor training plan, and training file

Document Control
● A consistent method of controlling documents that includes version numbering, dating, issuing, and withdrawing as controlled procedures
● Ensures that only the current version of the procedural document is used by all personnel
Example: Procedural documents include SOPs, informed consent templates, and investigational product accountability logs

Quality Plans
● Documents specifying which procedures and associated resources shall be applied, by whom, and when to a specific project, product, process, or contract; describes how the quality system is applied to a specific deliverable
Examples: Protocol-Specific Quality Management Plan; Monitoring Plan; Data Management Plan; Project Plan; Recruitment Plan; Quality Oversight Plan of Third Parties; Data Monitoring Committee Charter; Data-Safety Monitoring Plan; Annual Audit Plan

Quality Standards
● Organizational standards/requirements for conducting business
● Regulations, guidances, guidelines, regulatory authority inspection manuals
Examples: Protocol-specific requirements for endpoint assessments (i.e., by an MD or certified assessor) and other study-related procedures/activities; each informed consent is obtained prior to any study-specific procedures being performed on the subject; protocol requirements (e.g., subject enrollment criteria); trial-specific procedures and expectations (e.g., correct data entry per the source documentation into electronic data capture); predefined requirements (e.g., temperature at which investigational product must be stored)

Quality Control
● A set of activities intended to ensure that quality requirements are actually being met
Examples: Organizationally controlled procedural documents (e.g., SOPs [unblinding, randomization], templates, forms, job aids [flow charts, reference cards, business operations manuals]); the sponsor's clinical research associate (CRA)/monitor performs onsite monitoring visits to a clinical investigator; refrigerator and freezer temperatures are routinely checked/monitored to ensure they are within specified limits (requirements), with this routine check/monitoring documented in the temperature log

Monitoring and Measurement of the Quality System
● The organization monitors customers' perceptions of whether it has met their requirements
● Suitable methods are utilized to monitor and measure the organization's performance
Examples: Data analyses of performance (predefined/predetermined metrics and tolerance limits); customer reports of product quality complaints; internal audits of the quality system (see the quality assurance entry)

Facilities and Equipment
● Controlled environments required to execute the protocol
Example: The clinical site's refrigerators/freezers maintain the protocol-required temperature range, with documented evidence of ongoing monitoring, calibration, and maintenance per manufacturer specifications

Quality Assurance
● The aspect of quality management that focuses on the confidence that quality requirements are fulfilled
● Self-inspection audits of the quality system, processes, activities, and documents to independently assure that the defined requirements/standards for the protocol/system/process/procedure have been adhered to
● Conducted by defined, qualified personnel independent of the activity; performed in a systematic manner
Examples: Organization's routine audit of the quality system; clinical study report audit; clinical investigator site audit; quality systems audit of vendor/third party; Trial Master File audit

Table 1  Descriptions and Examples of Quality Management System Elements (continued)

Deviation Management
● Supports learning in the organization through the identification, recording, and investigation of activities that are not performed correctly or as planned, and provides the framework for how to investigate, plan, and change the way activities are performed
Example: Protocol deviation log maintained by the site and by CRAs/monitors

Corrective and Preventive Action (CAPA) Program
● Corrective action aims to address and manage identified areas of noncompliance and nonconformity (an issue or problem) by investigating the "root cause" in order to accurately eliminate it
● Preventive actions aim to establish proactive methods/steps/actions to foresee any issues and to prevent them from occurring
Examples: Incorrect version of the informed consent used for a study subject (corrective action: current version of consent in the file); subject's investigational product dose not adjusted per the results of their liver or renal values, as required by the protocol (preventive action: a checklist, per patient visit, outlining requirements)

Continuous Improvement
● The organization shall continually improve the effectiveness of the quality system through the use of the quality policy, quality objectives, audit results, analysis of data, CAPA program, and management review
● Build on the knowledge "known" and "learned" to make proactive improvements in individual trials, across all trials, and in the business enterprise (close correlation to knowledge management for the organization)
Examples: Use of audit findings, audit conclusions, analysis of data, management reviews, and deviation management to improve the quality system/organization; CAPA plans implemented as a result of audit conclusions, audit findings, internal monitoring of systems/processes and the quality system by staff, and CRA/site monitor feedback

Issue Escalation
● The issue escalation process describes how the project identifies, tracks, and manages issues and action items that are generated throughout the project life cycle; it also defines how to escalate an issue to a higher level of management for resolution and how resolutions are documented
● Unanticipated issues and action items are assigned to a specific person for action and are tracked to resolution
Example: Cases of suspected scientific/ethical misconduct and/or fraud are escalated within 24 hours to the compliance/quality assurance department and senior management for investigation

Management Review of the Quality System's Performance
● Senior management's overarching responsibility for review and analysis, at predetermined intervals, of the functioning/adequacy of the quality system utilizing key indicators of performance, quality, and revenue
● Has the quality system provided management with the information to reassure them of compliance to the organization's quality system?
● Through the assessment of metrics (performance, quality indicators), does the organization need to add anything or change anything in the quality system to meet organizational quality objectives?
Example: Biannual meeting and review of the quality system utilizing metric reports and performance dashboards

Upon identification of a metric that has either met or fallen outside its predetermined tolerance limits, the next step is the evaluation of the metric and any relevant companion metrics. This analysis supports a robust and comprehensive root cause analysis as to what the issue is, the issue's impact, and what corresponding directed and focused CAPA steps and continuous improvement measures should be taken. Further, metrics need to be evaluated alongside the associated risk management plan, and the actions analyzed: Did our mitigation plans work? Should we implement our predefined contingency plans? Do we need to reevaluate the assumptions referenced in the development of the risk management plan and revise that plan accordingly? Is this an issue we did not expect, thereby necessitating the development of a new risk management plan?

Quality systems metrics analysis focuses on the following critical questions:10
1. How is the system performing?
2. Which aspects of the system are performing poorly or need improvement?
3. Which aspects of the system are performing adequately?
4. What is ideal performance?
5. Are the improvements having the desired effect?
Figure 4 presents some useful guidelines for successfully using and evaluating quality systems metrics.10
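The companion-metric evaluation can be as simple as checking each performance indicator together with its paired quality indicator, in the spirit of Table 2. The Python sketch below does exactly that; the indicator names, targets, and observed values are invented for illustration.

# Minimal sketch: evaluating a performance indicator together with its
# companion quality indicator. Thresholds and observed values invented.

pairs = [
    # (name, observed performance, performance target,
    #        observed quality error rate, quality limit)
    ("CRFs submitted on time", 0.93, 0.90, 0.07, 0.05),   # fast but error-prone
    ("Imaging completed on time", 0.97, 0.95, 0.01, 0.02),
]

for name, perf, perf_target, qual_rate, qual_limit in pairs:
    perf_ok = perf >= perf_target
    qual_ok = qual_rate <= qual_limit          # quality reported as an error rate
    verdict = "OK" if perf_ok and qual_ok else "REVIEW"
    print(f"{name}: performance {perf:.0%} (target {perf_target:.0%}), "
          f"error rate {qual_rate:.0%} (limit {qual_limit:.0%}) -> {verdict}")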

Table 2  Examples of How Quality Indicators Are Subsets of Performance Indicators

Performance Indicator | Quality Indicator
Case report forms (CRFs) completed and submitted on time > 90% | CRF query rate < 5%
Imaging study completed on time > 95% | Image readability > 98%
Blood samples collected on time > 95% | Quantity not sufficient < 1%
Staff trained on SOPs prior to performing responsibilities/tasks in the SOP > 95% | SOP deviation rate < 3%
Staff trained on the protocol-investigational plan prior to performing delegated study tasks > 95% | Protocol deviations-violations < 2%
Site staff delegated tasks prior to study start > 95% | Staff delegated responsibilities correctly per licensure and/or certification requirements per state/country = 100%
Subjects enrolled on time > 85% | Subjects enrolled meet enrollment criteria = 100%
Informed consent obtained = 100% | Subjects consented with the correct ICF version = 100%
Informed consent obtained = 100% | Subjects consented prior to study-specific procedures = 100%

Summary
Effective execution of clinical trials featuring "quality built within" requires a prospective, systematic approach to quality through the implementation of a robust and comprehensive quality management system, whereby metrics provide the ability to measure an organization's performance. The quality management system principles described in this article represent the framework and components of progressive 21st-century practices for use by all stakeholders in the clinical research enterprise.

Figure 4  Guidelines for Using and Evaluating Quality Systems Metrics

Successfully Using Quality Systems Metrics
● Identify the stakeholders and the metrics for their activities
● Determine the metrics required for periodic management review of the quality system
● Determine how the metrics will change the way you perform your business
● Select the right metrics and rationalize the calculation rules (do you have the requisite expertise in-house?)
● Determine how you will use the results
● Define the reporting mechanisms (scorecard, dashboard, real-time reports) and issue escalation pathways, both internally and externally with other parties
● Determine the right source of the data
● Collect the data and validate the results
● Continuously evaluate the metrics (obtain feedback and study the utility of your measurement)
● Continuously improve the process
● Communicate the results

Successfully Evaluating Quality Systems Metrics
● What does this metric measurement mean?
● What will I do with this information?
● How will this communicate a "quality performance" threshold or tolerance limit requiring investigation or evaluation?
● Do I need another metric to get the "whole picture"?
● Identify any "companion metrics" to assist with the evaluation

References
1. Clinical Trials Transformation Initiative. Available at https://www.trialstransformation.org/scope.
2. Sweeney F. 2011. Defining Quality in Clinical Trials. DIA Annual Meeting, 2011.
3. Cameron K, Sine W. 1999. A framework for organizational quality culture. Quality Management Journal: 7–25.
4. BARQA. 2010. Quality Systems Workbook. Available at www.barqa.org (free download available).
5. International Organization for Standardization; www.iso.org.
6. Kleppinger C, Ball L. 2010. Building quality into clinical trials with use of a quality systems approach. Clinical Infectious Diseases 51(Suppl. 1): S111–S116.
7. ICH Q10 Pharmaceutical Quality System presentation; www.ich.org/products/guidelines/quality/quality-single/article/pharmaceutical-quality-system.html.
8. Burrow D. 2012. CDER BIMO Warning Letters as Case Studies: Building Quality in Clinical Trials. Presentation, ACRP Global Conference, 2012.
9. American Productivity and Quality Center Publication. 2000. Stages of Implementation: A Guide for Your Journey to Knowledge Management Best Practices. Houston, Texas.
10. Zuckerman D. 2006. Pharmaceutical R&D Metrics. Gower Publishing Ltd., England.

Additional Sources
Cianfrani C, Tsiakals J, West J. 2009. ISO 9001:2008 Explained. Milwaukee, Wis.: ASQ Quality Press.
Clinical Trials Transformation Initiative. Developing Effective Quality Systems in Clinical Trials: An Enlightened Approach; www.ctti.org.
ISO 9001:2008. Quality Management Systems—Requirements; www.iso.org.
Ribière V, Khorramshahgol R. 2004. Integrating total quality management and knowledge management. Journal of Management Systems 16(1).
Toth-Allen J. 2012. Building Quality into Clinical Trials: An FDA Perspective. FDA Webinar, 04 May 2012.
Tricker R. 2010. ISO 9001:2008 for Small Businesses. Burlington, Mass.: Elsevier.

Liz Wool, RN, BSN, CCRA, CMT, has 22 years of experience in the clinical research industry. She is president and CEO of QD-Quality and Training Solutions, Inc. (QD-QTS), a clinical quality systems, training, auditing, and CRO-vendor oversight consulting firm providing services to institutions, investigators, sponsors, and CROs. QD-QTS has offices in San Bruno, Calif., and Franklin, Tenn. A certified Master Trainer and instructional designer, she is also a member of ACRP's Association Board of Trustees and Editorial Advisory Board. She can be reached at [email protected].


Peer Reviewed

Metrics in Medical Imaging: Changing the Picture

Hui Jing Yu, PhD | Colin G. Miller, PhD | Dawn Flitcraft

To be confident about making decisions based on medical images for individuals and for clinical trials, medical professionals can use metrics to develop adequate assurance that the images were appropriately acquired and analyzed.

Home Study Article

Learning Objective: After reading this article, participants should be able to describe how an imaging core lab partners with sponsors to use metrics to ensure the collection of quality imaging endpoint data for clinical research studies.

Disclosures: Hui Jing Yu, PhD, Colin G. Miller, PhD, and Dawn Flitcraft are employees of and stockholders in BioClinica, Inc.


Medical images such as X-rays, computed tomography (CT) images, positron emission tomography (PET) images, dual energy X-ray absorptiometry (DXA) scans, and magnetic resonance images (MRIs) are essential tools for diagnosing and monitoring diseases and directing treatments. The medical decisions based on these images are vitally important for individual patients and clinical trials as a whole. To be confident about making these decisions, radiologists and other medical professionals must have adequate assurance that the images were appropriately acquired and analyzed. Medical imaging plays a growing role in clinical trials due to increased use of technology and improved computing power.1 In clinical trials, medical imaging is used primarily to evaluate efficacy endpoints and, more and more frequently, for safety evaluations and/or eligibility criteria.

Background

In multicenter clinical trials, the images will be obtained at multiple clinical sites, each with its own standard operating procedures, technologists, procedural protocols, and equipment. The experience of the technologists, the customization of each protocol, and the makes and models of the equipment used may vary significantly from one site to another. Additionally, many of these trials take place over periods ranging from weeks to years, during which changes of personnel and equipment often occur. Image quality control is required to minimize both inter- and intra-site data variance and to ensure delivery of more precise results.

An imaging core lab (ICL) offers a full suite of medical image management solutions for the lifecycle of a trial and for a wide range of imaging modalities. Table 1 lists typical services provided by ICLs. The goal of an ICL is to unify all the essential image data in a standardized format, in order to expedite the central review of the images and data export2–4 (see Figure 1 for a generic imaging workflow). A method to track and uphold rigorous standards related to high image quality in a clinical trial context is required to ensure the endpoints are clearly met. The use of imaging performance metrics to monitor image quality, so that the targets assigned to each metric are met, has therefore allowed appropriate levels of control for both the ICL and sponsors, thereby enhancing trial performance and quality.

Table 1  Comparison of Image Core Lab Services

Study Initiation and Startup
● Identify expert readers and consultants
● Assign project team
● Engage study startup team
● Design imaging protocol
● Communication plans
● Project-specific work instructions
● Develop imaging review charter
● Deploy site surveys
● Attend investigator meetings
● Provide imaging study kits
● Perform site visits
● Conduct web-based training

Collection Management
● Collect image data
● Query sites for missing data
● Translate/digitize image data
● Image quality assurance/quality control
● Image data query resolution
● Archive for long-term storage

Independent Review
● Analyze images
● Design independent read system
● Develop imaging review charter
● Provide reader training
● Conduct independent read
● Monitor independent read
● Monitor inter-/intra-reader variability
● Export data

Figure 1  Imaging Core Lab Process Workflow: the hospital or investigator site takes the image and sends it to the core lab; the core lab reviews, archives, and makes the images available to the readers.

Essentially, there are two major types of imaging from a quality control (QC) viewpoint:
● two-dimensional (2-D) (e.g., plain film X-ray, DXA, and ultrasound), and
● three-dimensional (3-D) or tomographic techniques (e.g., CT, MRI, and PET).

QC for 2-D imaging focuses more on positioning, since slight rotation or incorrect positioning may hide important anatomic features. The 3-D techniques tend to need more QC on the acquisition settings and review of patient motion, since the acquisition times are longer. Image QC primarily consists of checks on correct positioning, complete anatomical coverage, lack of patient motion, and the correct acquisition or instrument settings, such as the scan mode (e.g., T1 or T2) in MRI or scan thickness and coning in CT.

Image Quality Metrics

Within the lifecycle of an imaging trial, trial performance can be tracked using four types of metrics: cycle time, timeliness, quality, and efficiency/cost (Figure 2 and Table 2). Quality metrics can be further tracked as image quality (as determined by the reader or independent reviewer, although images are checked for quality at the technologists' level before being sent to the reader), image queries sent to sites, missing imaging visits, and adherence to the acquisition protocol.

A key first step to ensure that high-quality imaging endpoint data are collected for studies is to standardize image acquisition between sites. This can usually be accomplished by providing training to each site via a group location, telephone, Web conference, etc. Occasionally, site visits (visits to educate the technologists) are performed if the protocol is deemed to be more challenging than the standard-of-care procedure, or if the sites are not adhering to the imaging guidelines. Imaging guidelines are provided to the site simply to communicate and document the image-related expectations and requirements for a trial.

On an ongoing basis, data arrive at the ICL and are inspected for image quality, usually by radiological technologists, prior to being sent for the radiological evaluation or read. The reader can then determine the presence or absence of the necessary imaging and the associated image quality. Image quality metrics can be calculated based on the percentage of images that are readable (evaluable), suboptimal (readable but not optimal), or not readable, as judged both by the technologist and the readers.

Figure 2  Imaging Performance Flowchart: site startup → imaging guidelines sent to site → image acquired at site → image sent to core lab → image received at core lab → QC process → queries to sites → site query resolution → independent review process → review completed and report delivered.

Table 2  Imaging Core Lab Performance Metrics (all metrics reported monthly)

Metric | Category | Metric Title | Unit of Measure | Target
1 | Project Startup | Average number of days study kit sent | Turnaround time (days) | 3
2 | Project Startup | Completion of site qualification/training | Percentage (%) | 95%
3 | Project Startup | Independent review charter | Turnaround time (days) | 5 days from receipt of latest draft/final protocol
4a | Image Acquisition | Average number of days from image time point acquisition to receipt | Turnaround time (days) | 3 (electronic transfer)
4b | Image Acquisition | Average number of days from image time point acquisition to receipt | Turnaround time (days) | 7 (traditional transfer)
5a | Image Acquisition | Average number of days from image receipt to initial feedback sent to site | Turnaround time (hours; eligibility/safety) | 24 hours
5b | Image Acquisition | Average number of days from image receipt to initial feedback sent to site | Turnaround time (days; standard study) | 3 days
6 | Image Processing | Average number of days from image receipt to ready for independent review | Turnaround time (days; standard study) | 3 days
7 | Image Processing | Average number of days from when the image is designated for review to completion of the review, excluding images with outstanding queries | Turnaround time (days; standard study) | Variable
8 | Quality | Percentage of non-evaluable images vs. total images received | Percentage (%) | ≤ 3%
9 | Quality | Percentage of non-evaluable/missing baseline images | Percentage (%) | ≤ 2%
10 | Quality | Quality of data export | Percentage (%) | 99%
11 | On-Time Delivery | On-time delivery of read report(s) | Percentage (%) | 98%
12 | On-Time Delivery | On-time delivery of data export(s) | Percentage (%) | 98%
13 | On-Time Delivery | On-time delivery of FINAL data export | Percentage (%) | 99.9%
14 | Image Queries | Percentage of queries | Percentage (%) | < 10%
15 | Image Queries | Average number of days queries outstanding | Turnaround time (days) | 7

This can be evaluated at the study level, but also at country- and site-specific levels. If there are issues, a query can be generated and sent to the site for immediate resolution. The percentage of site queries is a performance metric that captures the rate of issues as an indication of whether the site training addressed the necessary key points for acquisition, and how closely the protocol is being followed. When a query is unresolved, imaging cannot be performed, or the protocol is not followed, the result is missing imaging data for either baseline or non-baseline visits. Such metrics can be defined and tracked throughout the trial, allowing for early escalation of potential site performance or study protocol design issues. Lastly, the number of image acquisition technique-related amendments, upon agreement between the ICL and sponsors, could be incorporated as a metric providing an indirect measurement of image quality (e.g., the greater the number of amendments, the lesser the robustness of the acquisition protocol and its quality).
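To illustrate the calculation, here is a minimal Python sketch of the image quality metric described above, computed at the study, country, and site levels; the statuses mirror the article's readable/suboptimal/not-readable categories, while the data and field names are illustrative assumptions.

```python
from collections import Counter

# Each record is (site, country, status); the statuses follow the
# article's categories. The data and field names are illustrative.
images = [
    ("site-101", "US", "readable"),
    ("site-101", "US", "suboptimal"),
    ("site-204", "DE", "readable"),
    ("site-204", "DE", "not_readable"),
]

def quality_breakdown(records, group_by=None):
    """Percentage of images per status, overall or grouped (e.g., by the
    site field at index 0 or the country field at index 1)."""
    counts = {}
    for record in records:
        group = record[group_by] if group_by is not None else "study"
        counts.setdefault(group, Counter())[record[2]] += 1
    return {
        group: {status: round(100.0 * n / sum(c.values()), 2)
                for status, n in c.items()}
        for group, c in counts.items()
    }

print(quality_breakdown(images))               # study level
print(quality_breakdown(images, group_by=1))   # by country
print(quality_breakdown(images, group_by=0))   # by site
```

Grouping the same records three ways is what makes the metric actionable: a study-level rate that looks healthy can still hide a single site whose non-evaluable percentage warrants a query or retraining.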

Two Case Studies

What follows are two case studies presented as examples of how ICLs use metrics to ensure that high-quality imaging data are collected for studies (see Table 3 for a summary).

Case 1

Many ICLs have started to include imaging performance metrics as part of their standard reports. Implementing a set of standardized metrics allows the early escalation of potential core lab or site performance issues that require immediate remediation, and identification of any need to retrain sites or redesign image acquisition guidelines.

Table 3  Case Study and Metrics Summary

Case Study | Category | Metric Title | Target
1 | Image Queries | Percentage of site queries | < 15%
2 | Image Quality | Percentage of non-evaluable baseline images | < 2%
2 | Image Quality | Percentage of non-evaluable images (non-baseline) | < 2%

One example of this approach is the case where a client considered an X-ray procedure to be so simple and straightforward that site training was not requested; it was assumed that a paper manual would suffice.5 Unfortunately, that decision resulted in a data clarification form (DCF) rate (site queries) of approximately 75%, which caused both a lack of precision and a loss of time due to requests for repeat procedures, and directly translated into poor data and increased costs. The sponsor quickly produced a CD-based training program6 for this study, including a test to ensure understanding of the material, and requested that the sites have the appropriate personnel take the program. The result after sites completed the CD training was a 90% decrease in the DCF rate, to less than 7%. This training format provides an excellent, cost-effective way to ensure protocol compliance while improving the precision of study data. This, in turn, either improves overall statistics and/or shortens the time required to detect significant change, thus reducing overall cost for sponsors.

Figure 3  Case 1 Exhibit: Post-Training Reduction in Site Queries (approximately 75% DCF rate with no training vs. less than 7% post-training, a 90% reduction in site queries).

Case 2

The integration of the ICL as part of clinical trials in all therapeutic areas where medical images are collected is particularly important to harmonize data quality across sites. For example, in an oncology trial involving more than 30 countries and 200 sites across the globe, it was challenging to obtain high-quality, standardized data due to varying technical capabilities at hospitals and imaging facilities. Other sources of variation included the study duration and the imaging modalities involved. In this case, the study lasted for six years, with CT and MRI data being collected at all time points. At screening (baseline), all subjects were required to have a bone scan (nuclear medicine image), which was to be repeated at follow-up time points if disease was present at baseline or if clinically indicated.

Because an ICL was involved in the study, the sites received standardized instructions (image guidelines) at the start of the study and, for the most part, the imaging quality was comparable across sites. However, many imaging protocols, including the one for this study, provide high-level requirements for imaging and do not include the level of detail that is realistically needed. In this study, 95% of the total data submitted were digital, with only 5% submitted on film. Of the film data submitted, 90% were bone scans, because it was challenging for some sites to provide technically adequate digital images in the correct format. Instead, images submitted digitally were in JPEG format with improper leveling and windowing, resulting in images lacking detail.

Considering that bone scans were required for all subjects at screening and these images were used to determine subject progression, the problems encountered with film data could have resulted in a much lower than ideal rate of readable images for follow-up imaging time points. However, through tracking of the relevant metrics, the ICL was able to identify these issues early and, together with the sponsor, worked with the regional monitors and sites to find locations where subjects could receive bone scans that were acceptable and usable for the study. Furthermore, the ICL provided extra training to optimize contrast for screenshot images. These measures resulted in excellent submissions as measured by image quality metrics: out of 789 baseline time points, only one was not readable (i.e., 99.87% of baseline images were readable), and out of 4,810 follow-up time points, only 40 were not readable (i.e., 99.17% of non-baseline images were readable). Most of the scans that were considered not readable were missing anatomy. Thus, if the ICL is not involved at the start and standardized guidelines are not provided, studies can run into data quality issues that might otherwise be avoided.

Figure 4  Case 2 Exhibit: Good Quality Bone Scan (Left Set) vs. Poor Quality Bone Scan (Right Set): Lesions Not Visible on Poor Quality Scan.

Electronic Image Submission

Overall, submission of images via electronic means reduces the transit time from the site by more than 80% compared with traditional means (courier). This is achieved by avoiding customs involvement when moving the package in and out of countries, as well as other human or weather-related delays that could slow the shipment from the site to the ICL. Electronic submission is the quickest way to submit images to the ICL and enables the site to mask the image data before transmitting to the ICL. Also, electronic submission is spotlighted in the training materials and at the investigator meetings because it is a great solution to a constant challenge for all clinical trials. Even with the minimal setup required for electronic image transmission programs, some sites continue to send image data via courier. This may be out of habit, because firewalls and other technology hurdles are typically not an issue at the sites. ICLs should work very closely with the CRO to encourage sites to use electronic image transmission programs.

Conclusion

The use of medical imaging in clinical trials has developed from the early days of passively collecting images and having them evaluated on light (film) boxes by radiologists. Improvements in the related technology over time have greatly increased the ability of medical experts to use imaging as a critical biomarker, whether for eligibility, safety, or efficacy. The practice now stands as its own major scientific pursuit as well as a focus for operational logistics management.7 The critical use of metrics has helped empower this progress; metrics in their own right are of little value unless they can effect change to a process. The examples provided here have demonstrated the value of metrics for contributing to ICL operational capabilities and, ultimately, for providing improved study outcomes through decreased variability in data, leading to greater statistical confidence in study findings. Greater statistical confidence will ultimately lead to a decreased number of patients in future trials. Finally, the ethical and financial implications of using appropriate metrics should not be underestimated.

References
1. Reiber H, van Kuijk C, Schwarz L. 2005. Medical imaging and its use in clinical trials. European Pharmaceutical Contractor Autumn 2005: 80–4.
2. Miller CG, Noever K. 2003. Taking care of your subject's image: the role of medical imaging core laboratories. Good Clinical Practices Journal 10(9).
3. Miller CG. 2005. Medical imaging core laboratories. Applied Clinical Trials October 2005.
4. Miller C, Noever K. 2003. Scanning data for clinical information. Good Clinical Practices Journal 10(9): 21–4.
5. Pearson D, Miller CG. 2007. Clinical Trials in Osteoporosis. Springer Verlag.
6. Miller CG, Wertz K. 2007. Education alternatives for imaging techniques in clinical trials. World Pharma Network July: 42–3.
7. van Meurs P, Miller CG. 2011. Image conscious: new FDA guidance. Samedan Ltd December: 24–8.

Hui Jing Yu, PhD, is a medical affairs scientist at BioClinica, Inc., where she provides scientific and medical support to the pharmaceutical industry on the use of imaging biomarkers in clinical trials. She also provides support to internal business development, marketing, and operations teams. She holds a bachelor's degree in biomedical and electrical engineering, an MSc degree focused on physiology and biophysics research, and a PhD in biomedical engineering from Stony Brook University in New York. She has written and coauthored several scientific publications and, as the primary author, she drafted and reviewed this manuscript. She can be reached at [email protected].

Colin G. Miller, PhD, is senior vice president for medical affairs at BioClinica, Inc., where he is responsible for medical and scientific consulting. He joined BioClinica (formerly Bio-Imaging Technologies, Inc.) in 1999 as vice president of business development. He has also served as director of clinical services at Bona Fide (a company he started in 1994 that was later acquired by Bio-Imaging Technologies, Inc.), and as the head of the physical measurements team for Europe at Procter & Gamble Pharmaceuticals. A fellow of the Institute of Clinical Research, he also is an associate member of the Radiological Society of North America, a member of the American Society of Bone and Mineral Research, and a member of the Metrics Champion Consortium. He has written and coauthored more than 40 scientific publications. He received his bachelor's degree in physiology and zoology from the University of Sheffield and a PhD from the University of Hull, both in the U.K. As a coauthor, he critically reviewed this manuscript.

Dawn Flitcraft is senior vice president for client services at BioClinica, Inc., where she oversees the project management, imaging core lab, and clinical operations departments of the Medical Imaging Solutions Division and is responsible for overall client relations. She joined BioClinica as director of project management when it acquired Quintiles Intelligent Imaging in 2001. She held several positions at Quintiles, including image processing specialist, senior manager, and finally director of clinical research and development. She holds a bachelor's degree in biology and nuclear medicine from Cedar Crest College in Allentown, Pa.; has certifications from the Nuclear Medicine Technology Certification Board, the American Registry of Radiologic Technologists (Nuclear), and the American Registry for Diagnostic Medical Sonography; and is a member of the Steering Committee for the Metrics Champion Consortium. As a coauthor, she critically reviewed this manuscript.

Peer Reviewed

A Case for Site-Centric Operational Metrics

Henry J. Durivage, PharmD | Srini Kalluri, BS

Three case studies from a collaboration of cancer centers illustrate the advantages of site-centric operational metrics.

There is a growing consensus in the clinical research industry that performance metrics are fundamental to the implementation of continuous improvement strategies for higher quality and efficiency at all stages of the clinical trial lifecycle.1–7 Toward this end, a growing volume of metrics is being collected about sites and the clinical trials in which they are participating, at the request of sponsors and contract research organizations (CROs). The goal of this data collection is process improvement through such strategies as data-driven site selection. With all this attention being paid to site scoring and clinical trial performance, however, sponsors and CROs continue to be frustrated by the lack of timely data that truly reflect the status of trials or give insight into study conduct at the site. Sponsors/CROs are not getting the data they need because they, rather than the sites, determine the metrics to be requested, and these metrics typically provide no direct benefit to the sites. Further, the data requested may not be readily available and/or may require additional effort to collect outside the normal workflow of study conduct.

This article discusses three case studies from a collaboration of cancer centers to illustrate how site-centric operational metrics are more timely, take less effort to collect, are more actionable, and better motivate the right behaviors. Furthermore, when sites work together to share their combined experience, aggregated site-centric metrics provide benchmarks for comparison between centers and an opportunity for collective learning. The case studies are drawn from a group of academic research organizations, cancer centers, and research hospitals, all of which take advantage of a collaborative environment and use a common clinical trial management system (CTMS) in order to cooperate and collect informative data on their clinical research operations. This group is called Onsemble.8 For more than four years, work within Onsemble on developing metrics for the measurement and improvement of clinical research operations has been headed by the lead author of this article and Kerry Bridges, MBA, RN, CCRC, administrator of the Melvin and Bren Simon Cancer Center at Indiana University. Initial projects have drawn upon the participation of 16 cancer centers that cooperated to collect and analyze aggregate data related to clinical trial performance metrics and resource allocation.


Measure the Work Where and When it is Being Conducted

Currently, many CROs and sponsors collect metrics around site performance for their own use, often requiring the sites to provide much of the information being collected. In this environment, site staff already challenged by time and resource limitations are being asked to enter arbitrary operational data into sponsor and CRO systems. In a site-centric model, sites reap the rewards of collecting their own metrics for the purpose of measuring and improving their internal processes. Sites benefit from taking the time to develop metrics that reflect their business goals and give an indication of the status of their daily operations in everything from process efficiency to financial health. Successful sites build strategic planning, tenacious execution, and rigorous feedback into a solid strategy for continuous process improvement. They use these metrics to promote their strengths and make improvements targeting their weaknesses.

One center identified issues with slow-accruing investigator-initiated trials. This observation led to an analysis of data from eight centers evaluating reasons why some studies are completed and others are not. As presented at a recent Association of American Cancer Institutes Clinical Research Initiative meeting, low accrual three months and six months after study start was a highly significant predictor of trials that failed to meet their primary scientific aim.9

Start with Data on Hand

The best motivation for getting started with metrics is to determine what the current challenges are compared to the site's business goals. After choosing a few key challenges, the site should try to determine what the current assumptions are about each challenge. Site leadership should ask: "What do we know, or think we know, about the current process?" and "What can we measure today?" At the beginning, sites will have the most success if they focus on data that are already within their grasp; that is, by leveraging the data that are already being recorded as part of the daily and natural workflow of the site. Once a continuous improvement strategy based on metrics has been implemented, a long-term strategy will likely include additional data points; but to start, data that are easy to collect are more likely to be obtained.

For example, all cancer centers must obtain approval from a cancer center scientific review committee before institutional review board (IRB) submission. All centers know key milestones in the protocol initiation process, including date of scientific review approval, date of IRB review, date of IRB approval, and date of first patient enrolled. These simple metrics, when analyzed by trial sponsor, disease categories, etc., and compared to other centers, can yield actionable results. This level of granularity can be evaluated only when the dataset is large enough to make meaningful comparisons. For example, in the Onsemble group, the median time to activation was determined for each site and each sponsor type (a minimal sketch of this kind of milestone calculation follows below). One center discovered that its performance in activating cooperative oncology group trials was far below the median of the 14 other centers. With this information in hand, the center was able to identify problems and implement corrective actions.
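As an illustration, here is a minimal Python sketch of the milestone-based activation metric described above; the milestone field names and dates are illustrative assumptions, and a real analysis would pull these values from the CTMS.

```python
from datetime import date
from statistics import median

# Milestones the article names, with illustrative dates; a real analysis
# would draw these fields from the CTMS.
trials = [
    {"sponsor_type": "industry",
     "scientific_review_approval": date(2011, 1, 10),
     "irb_approval": date(2011, 2, 21),
     "first_patient_enrolled": date(2011, 4, 2)},
    {"sponsor_type": "cooperative_group",
     "scientific_review_approval": date(2011, 3, 5),
     "irb_approval": date(2011, 5, 30),
     "first_patient_enrolled": date(2011, 9, 12)},
]

def median_activation_days(trials, sponsor_type):
    """Median days from scientific review approval to first patient
    enrolled for one sponsor type; None if no trials match."""
    spans = [
        (t["first_patient_enrolled"] - t["scientific_review_approval"]).days
        for t in trials
        if t["sponsor_type"] == sponsor_type
    ]
    return median(spans) if spans else None

print(median_activation_days(trials, "cooperative_group"))  # 191
```

Because every field here is already captured in the normal startup workflow, the metric costs the site essentially nothing to compute, which is the article's central argument for starting with data on hand.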

Multisite Aggregate Metrics Benefit all Participants

Measuring processes at an individual site over time can be extremely helpful in support of continuous improvement initiatives. Additionally, the ability to compare a site's performance to the average performance of many sites can be highly illustrative and actionable. Having good data about a site's peer community, as described above, is an efficient way to get an initial benchmark for target performance.

Strategic use of metrics can also be a boon to business development goals. Comparison of a single site against the average performance across many sites need not be all about teasing out areas for improvement. Identifying areas where a site has a very strong competitive advantage can be highly beneficial for securing additional trials and for eliminating wasted effort. For example, a site that has a proven track record of successfully accruing to geriatric trials should advertise this fact when wooing sponsors of trials with this target subject community. Likewise, this site should probably decline participation in pediatric trials.

Additionally, when the participating sites are willing to share aspects of their strategy with each other, the sites with significantly better than average performance may be convinced to share the strategies that have delivered them positive results. When eight cancer centers compared the completion rates of their portfolios of Phase II investigator-initiated trials, two centers had clearly superior results (see Figure 1).11

Figure 1  Enrollment of Subjects to Completed Studies Versus Studies Closed due to Inadequate ("Slow") Enrollment. Center 2: slow-accruing 30, completed 289, total 328 (9% of enrollment to slow-accruing studies); Center 7: slow-accruing 67, completed 813, total 964 (7% of enrollment to slow-accruing studies); The Rest: slow-accruing 587, completed 1,267, total 2,028 (29% of total enrollment to slow-accruing studies).

Methodology

Taking advantage of the benefits of a collaborating community requires agreement on the definition of data points. Much up-front work must be done to harmonize data definitions across the participants, but the rewards are great. The fact that the participating Onsemble cancer centers all used the same CTMS did provide some advantage for the purposes of collecting data and gaining consensus on data definitions. Each site had at its disposal identical fields, with relatively minor discrepancies in how the various fields were used at each center. After coming to a consensus on a select number of fields, all of the participating centers cleaned up their own databases and filled in missing data.

One field that the group wanted to capture was labeled the "Study Closure Reason." Unfortunately, this was a field in which data were entered as free text, leading to a great deal of variability in the resulting data across the participating centers. As a group, the centers came to a consensus on how to codify and categorize reasons for study closures. Each center then cleaned up this field so that the aggregate data set would support apples-to-apples comparisons on this data point (a minimal sketch of such a cleanup appears below). In the interest of protecting sensitive data, or data that might be considered of competitive advantage to participating centers, the vendor of the CTMS acted as the trusted third party for the purpose of collecting, aggregating, and blinding the operational data.
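As an illustration of that cleanup, the following minimal Python sketch maps free-text "Study Closure Reason" entries onto agreed categories; the categories and keywords are illustrative assumptions, not the group's actual consensus list.

```python
# Normalizing free-text "Study Closure Reason" entries into agreed
# categories so cross-center comparisons are apples-to-apples. The
# categories and keywords are illustrative, not the group's actual list.
CLOSURE_CATEGORIES = {
    "slow_accrual":     ("slow", "low accrual", "inadequate enrollment"),
    "completed":        ("completed", "accrual goal met"),
    "safety":           ("toxicity", "adverse"),
    "sponsor_decision": ("sponsor", "funding withdrawn"),
}

def codify(free_text):
    """Map one free-text entry to the first matching category."""
    text = free_text.lower()
    for category, keywords in CLOSURE_CATEGORIES.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "other"  # left for manual review at the center

print(codify("Closed early - inadequate enrollment"))  # slow_accrual
print(codify("Accrual goal met; study completed"))     # completed
```

A keyword pass of this sort only does the first 80%; the point of the Onsemble exercise was that each center then manually reviewed the "other" bucket against the agreed definitions before data were aggregated.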

Responding to the Results

One Onsemble metrics project involved data collected through 2009 by eight participating cancer centers, focused on protocol performance of Phase II investigator-initiated trials.10 The group compared the completed trials versus discontinued trials within their portfolios. It was obvious very quickly that two of the centers had significantly higher rates of completed trials. At the next face-to-face gathering of the participants, these two sites presented their insights into what their centers were doing that might be contributing to their success in this area.

Using metrics to influence process and policy changes at the site level can be seen in the results achieved by a group of 16 cancer centers who came together to measure and evaluate their operational performance in the area of protocol performance over two separate periods of time.11 Initially, a four-year period was evaluated regarding the performance of industry and cooperative oncology group trials. In this study of approximately 3,000 clinical trials and approximately 9,000 subjects, about 20% of the studies enrolled 90% of the subjects. With this information in hand, several centers implemented protocol development guidelines. For example, two centers required investigators to demonstrate that they would evaluate at least 12 patients who would fit eligibility criteria during the time the trial was anticipated to be open to enrollment. Re-evaluation of 14 of the centers over a subsequent two-year period showed a marked positive difference in the centers that had implemented protocol development guidelines.


Furthermore, the resources used on underperforming clinical trials were evaluated in each project and, to no real surprise, it was found that significant effort was expended to support trials that were of little value to the institution or sponsors. Some centers made enhancements to their closure policies in order to limit the time and effort wasted on trials that are unlikely to be successfully completed due to inadequate accrual. In 2010, Ellen Graves Wojcik, MBA, CCRP, from the Winship Cancer Institute at Emory University, gave a presentation on the impact of newly implemented closure policies at that center.12 Changes implemented included stricter guidelines related to accrual, such as requiring new trials to accrue at least 25% of their target accrual rate during the first six months or be subject to closure (a minimal sketch of this rule appears below). After just one year of study, the institute was able to demonstrate positive and measurable changes brought on by the implementation of stricter policies (see Figure 2).
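As a minimal sketch of how such a closure policy might be checked against CTMS data, the following Python function flags trials under the 25%-of-target rule described above; the function and field names are illustrative assumptions, not Winship's actual implementation.

```python
# A sketch of the accrual-based closure rule described above: flag new
# trials that accrue less than 25% of their target during the first six
# months. The function and field names are illustrative.
def subject_to_closure(enrolled_first_6_months, six_month_target,
                       threshold=0.25):
    """True if the trial falls under the closure policy."""
    return enrolled_first_6_months < threshold * six_month_target

print(subject_to_closure(enrolled_first_6_months=3,
                         six_month_target=24))   # True: 3 < 0.25 * 24
print(subject_to_closure(enrolled_first_6_months=9,
                         six_month_target=24))   # False: 9 >= 6
```

The rule is deliberately mechanical: by tying closure to a number already tracked in the accrual log, the policy removes the temptation to keep a struggling trial open on optimism alone.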

Figure 2  Comparison of Trial Performance Related to Accrual Following the Implementation of New Closure Policies at Winship Cancer Institute at Emory University (2008 vs. 2009).12

Conclusion

Using a very pragmatic approach to measuring clinical research operational performance, members of the Onsemble metrics collaborations have demonstrated on multiple occasions that site-centric performance metrics can be implemented, using tools that are already part of the daily workflow of the sites, to provide demonstrable results in a relatively short timeframe and with greater efficacy than the top-down approach currently employed by many sponsors and CROs. An industry full of self-aware sites results in improved clinical research operations where the work is being conducted, along with a rich pool of operational metrics from which to draw. Additionally, the collaborative approach employed by the participants provided greater benefits to each participant than could be accomplished alone, in the form of persuasive datasets and opportunities for collective learning.

References

1. Schroen AT, Petroni GR, Wang H, Gray R, Wang XF, Cronin W, Sargent DJ, Benedetti J, Wickerham DL, Djulbegovic B, Slingluff CL Jr. 2010. Preliminary evaluation of factors associated with premature trial closure and feasibility of accrual benchmarks in Phase III oncology trials. Clinical Trials 7(4): 312–21.
2. Cheng SK, Dietrich MS, Dilts DM. 2011. Predicting accrual achievement: monitoring accrual milestones of NCI-CTEP-sponsored clinical trials. Clinical Cancer Research 17(7): 1947–55.
3. Korn EL, Freidlin B, Mooney M, Abrams JS. 2010. Accrual experience of National Cancer Institute Cooperative Group Phase III trials activated from 2000 to 2007. Journal of Clinical Oncology 28(35): 5197–201.
4. Baer AR, Bridges KD, O'Dwyer M, Ostroff J, Yasko J. 2010. Clinical research site infrastructure and efficiency. Journal of Oncology Practice 6(5): 249–52.
5. Cheng SK, Dietrich MS, Dilts DM. 2010. A sense of urgency: evaluating the link between clinical trial development time and the accrual performance of cancer therapy evaluation program (NCI-CTEP) sponsored studies. Clinical Cancer Research 16(22): 5557–63.
6. Kossman S, Hsieh Y, Peace J, Valdez R, Severtson L, Burke L, Brennan PF. 2009. A theory-based problem-solving approach to recruitment challenges in a large randomized field trial. Applied Nursing Research Sep 17.
7. Barnard KD, Dent L, Cook A. 2010. A systematic review of models to predict recruitment to multicentre clinical trials. BMC Medical Research Methodology 10: 63.
8. See www.onsemble.net for more information on the Onsemble community.
9. Durivage HJ, Bridges KD, Yao X, Li F-Y, Sauers J, Wellons M, Baker L. 2011. Protocol Performance Metrics and Resource Utilization of Phase II Investigator-Initiated Trials. Feature presentation at the Association of American Cancer Institutes (AACI) Clinical Research Initiative (CRI) meeting, July 2011.
10. Durivage HJ, Bridges KD. 2009. Clinical trial metrics: protocol performance and resource utilization from 14 cancer centers. Journal of Clinical Oncology 27: 337S (suppl.; abstr. 6557).
11. Durivage HJ, Bridges KD, Sauer J, Baker L, Wellons M. 2010. Protocol performance and resource utilization of Phase II investigator-initiated trials. Journal of Clinical Oncology 28: 463S (suppl.; abstr. 6066).
12. Wojcik EG. 2010. Low-accruing trials: ever-evolving closure policies at the Winship Cancer Institute. Presentation at the Onsemble 2010 Spring Conference. Newsletter article based on this presentation available at: www.onsemble.net/index.php?option=com_content&view=article&id=180:low-accruing-trials-ever-evolving-closure-policies-at-the-winship-cancer-institute&catid=85:spring-2010&Itemid=79.

Henry J. Durivage, PharmD, joined the Yale School of Medicine in 2010 as director of the Yale Cancer Center Clinical Trials Office and an associate director in the Yale Center for Clinical Investigation to spearhead Yale's enterprise-wide implementation of a new clinical research management system and to serve as Yale Cancer Center's regulatory expert. He was recently elected to the Steering Committee of the Association of American Cancer Institutes Clinical Research Initiative and to the Audit Committee of the Eastern Cooperative Oncology Group. As coauthor of this article, he has been a driving force in the referenced case study projects since the inception of the Onsemble metrics collaborative initiative in 2007. He can be reached at [email protected].

Srini Kalluri, BS, founded the clinical research management software development company Forte Research Systems, Inc., in 2000. Today, he continues to drive the company's cultural and strategic direction as CEO and chief customer officer. As coauthor of this article, he has an ongoing collaborative relationship with the centers that contributed data, and his company provided infrastructure for collaboration and acted as the trusted third party for capturing and aggregating the metrics data. He can be reached at [email protected].

Peer Reviewed

Predictive Analytics: A Nonstatistical Perspective as Related to Executing Effective Clinical Trials

April Davis, MS

This article explores the high-level use of predictive analytics as a means to leverage clinical trial performance data to accelerate or mitigate events in drug development programs.

As has been exhaustively documented in commercial and scientific literature, pharmaceutical drug pipelines are in rapid decline, thus intensifying the acute global financial demand for streamlining existing clinical trial processes and technology. Simply stated, increasing revenue and market share is a hyper-focus in the biopharmaceutical industry today. Therefore, predicting the outcome of a drug development program becomes mission critical. Traditionally, predictive modeling techniques are leveraged for statistical planning, for example, in drug supply and investigator or patient recruitment areas. With a combination of technology and clinical trial performance data, predictive analytics can be an effective approach to generating leading indicators of a drug program's outcome, thus accelerating decision-making, abating unnecessary costs, and ensuring quicker time to market. This article explores the high-level use of predictive analytics as a means to leverage clinical trial performance data to accelerate or mitigate events in drug development programs.

Understanding Predictive Analytics

Fundamentally, predictive analytics is a discipline of business processes, statistical models, and technology combined to collect, manage, transform, and represent data in future scenarios. Typically, these scenarios are viewed in a tabular or graphical manner. Predictive analytics is also considered in conjunction with business intelligence (BI) technologies and practices. BI, however, results in a presentation of data, whereas predictive analytics takes that representation to the next stage: outcomes-based forecasts. MIT's Sloan Management Review, in collaboration with IBM, produced a research paper on predictive analytics with a definition of analytics that points to a combination of data and data's inherent predictive nature. In essence, (predictive) analytics is "the use of data and related insights developed through applied analytics disciplines…to drive fact-based planning, decisions, execution, management, measurements, and learning. Analytics may be descriptive, predictive, or prescriptive."1 The use of predictive business analytics has fused "what then," "why then," "what now," and "what tomorrow" queries into a single-threaded retrospective and prospective means (or end). For example, the results from querying an investigator's track record in patient recruitment, as well as in-stream clinical trial performance, when aggregated, become leading indicator information for real-time decision-making. Figure 1 depicts an example of a predictive analytics data flow and an evolving process of leading indicators, and their potential practical use in this situation.

Figure 1  Sample Life Cycle Flow in Predictive Analytics: From Data to Scorecard (Clinical Trial Example)
● Clinical trial data items (collect and aggregate): CRF data entered; activated sites; contractual planning milestones
● Cycle time and quality leading indicators (categorize and optimize): patient visit on time; CRF entry on target; site monitoring report quality
● Scorecard (visualize, model, decide): Why was the visit late? Weigh against predictive patient and site models; decision to drop or keep the investigator
Note: CRF = case report form.

Predictive analytics is also understood from a cultural or organizational perspective. Research describes potential organizational adoption cultures as aspirational, experienced, and transformed. Figure 2 presents three types of organizational cultures that undergo a level of change or adoption related to predictive analytics. Those data-driven organizations that have transformed themselves throughout form, function, and culture have truly taken advantage of predictive analytics and its benefits for shaping the future direction of their companies.

Figure 2  4,500 Companies Surveyed and Percentage Reporting the Level of Analytics Adoption
● Transformed (24%): mature, strong culture adapted to utilizing an analytics program throughout the organization, seeing positive results; robust toolset complementing the new culture
● Experienced (45%): moderate use in the organization; new ideas and innovation flow; weak leadership to execute; functional silos use reporting as a start
● Aspirational (32%): early adopter, tactical use of reporting; primary tool is still the spreadsheet
Note: Adapted from MIT's Sloan Management Review, 2011.1

During a trial's clinical phases or its process stages (startup, conduct, closeout), predictive analytics can also be understood as a "continuous gearworks" machination of clinical planning, business process, technology, data compilation and response, and organizational adoption. The gearworks concept underscores a key theme in this article.

The Continuous Gearworks Concept

In today's typical pharmaceutical organization, clinical trial performance is gauged through dashboards and retrospective metrics across business units. Spanning the clinical trial process stages, operational data are captured and presented in a variety of methods: disparate static spreadsheets, centralized clinical trial management systems (CTMS), or, at a minimum, data that are routed from distinct clinical systems to central repositories for subsequent rollup and dissemination.
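To ground this in something concrete, here is a minimal Python sketch of the data-to-scorecard flow outlined in Figure 1, turning captured operational events into leading indicators and a simple site score; the event fields, weights, and data are illustrative assumptions, not from the article.

```python
# Turning raw operational events into leading indicators and a simple
# site scorecard, per the flow in Figure 1. Event fields, weights, and
# data are illustrative.
WEIGHTS = {"visit_on_time": 0.6, "crf_on_time": 0.4}

def leading_indicators(events):
    """On-time rates (0.0-1.0) for patient visits and CRF entry."""
    def rate(kind):
        hits = [e["on_time"] for e in events if e["type"] == kind]
        return sum(hits) / len(hits) if hits else None
    return {"visit_on_time": rate("patient_visit"),
            "crf_on_time": rate("crf_entry")}

def scorecard(indicators):
    """Weighted score a manager might review before deciding to keep or
    drop an investigator."""
    return sum(WEIGHTS[name] * value
               for name, value in indicators.items()
               if value is not None)

events = [
    {"type": "patient_visit", "on_time": True},
    {"type": "patient_visit", "on_time": False},
    {"type": "crf_entry", "on_time": True},
]
indicators = leading_indicators(events)
print(indicators, round(scorecard(indicators), 2))
# {'visit_on_time': 0.5, 'crf_on_time': 1.0} 0.7
```

The interesting decisions live outside the code: which events count as "on time," how the weights are set, and what score should trigger a conversation with the site, all of which are the organizational questions the rest of this section takes up.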

As technology continues to advance, data aggregation systems that normalize data across clinical business processes, related clinical programs, and clinical systems continue to mature. With globally managed trials, and an ever-present swarm toward potential patient populations, the challenge persists as to how to consolidate this exponentially greater dataset across disparate data sources and across the conglomeration of clinical trial partner companies. Once data consolidation is resolved, we enter the data mining paradigm shift from simply viewing data as they are (or were) to asking the critical questions of "why" and then "what if." Kruetter describes how predictive analytics is used to respond to performance data and improve operational planning. He suggests that an evolutionary process or cycle occurs between planning and execution; however, analytics is required to play a part in the cycle and in the field, as this method brings to the forefront the understanding of why events are happening, rather than the blind acceptance of operational results as historically achieved.2

With today's dynamic global organizations, clinical trial operations groups are equipped to leverage all data to inform them of the "why" and "what next" queries. Data are now accessible and valuable within and across a pharmaceutical company. The literature provides case examples of operational managers who evaluate site and physician performance in ways that were never considered before. How and when investigators access sponsor websites, the finer details of their usage, and their performance patterns are all considered and plugged into modeling, thus linking clinical trial execution and real-time strategy/forecast plans. The predictive analytics approach, and the convergence of clinical planning, sponsor and investigator operational performance data, and the clinical trial process, are applied here organizationally and culturally, and illustrate the concept presented earlier describing predictive analytics as a continuous process or "gearworks" (see Figure 3).

Figure 3  The Continuous Gearworks of Predictive Analysis: a continuous data flow linking the trial process and execution with predictive analytics, clinical planning, organizational adoption, and operational performance.

The use of predictive analytics is evident in how, over the past decade, large pharmaceutical companies in particular have tied their traditionally separate clinical functional groups together to perform clinical functions more centrally. Generally speaking, clinical trial technology products or tools used by these groups have followed course, as observed by the technological convergence of the electronic data capture products, interactive voice response services, and CTMS applications that are so prominent in the clinical technology market today. Data aggregation strategies are trending in this manner as well. At the coattails of this trend is a constant need for clinical integration spanning all business units within a biopharmaceutical company, extending to clinical service providers and downstream healthcare sites.3 With modern predictive analytics tools, having access to an on-demand cross section of a pharmaceutical company's functional layers enables different managers to gain integrated and hybrid insight into the success of a particular clinical trial. For example, a drug product manager in marketing now has the capability to view the clinical/data management operational performance metrics of a drug as it undergoes safety and efficacy trials.

Targeting Patient Enrollment with Predictive Analytics

As mentioned, predictive analytics within the clinical trial space evolved from traditional or functional practice, such as drug supply and patient recruitment. Drug distribution modeling and forecasting procedures were fairly straightforward and common in trials where dosing strategies were defined up front and did not vary. However, the evolving practice of conducting trials via "adaptive trial management" has resulted in drug supply forecasting becoming more complex because of the dynamic and varying need for multiple dosing strategies during a trial. Targeting patient enrollment at the onset, and responding to patient visits or events proactively, become more pivotal in adaptive trial design, as patient enrollment has a knock-on effect downstream on the follow-on drug distribution plan. Therefore, the applied use of predictive analytics in adaptive trials becomes critical, as it enables companies not only to improve target enrollment but also to act on patient data, model multiple drug supply futures and locations, and limit overage, given the variable nature of the adaptive trial process. A minimal sketch of this kind of projection follows below.
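Here is a minimal Python sketch of the kind of enrollment projection that feeds adaptive drug supply planning; the linear model, function names, and numbers are illustrative simplifications of what a real forecasting model would do.

```python
# A naive linear enrollment projection of the kind that feeds adaptive
# drug supply planning; real models would weigh site ramp-up, screen
# failures, and seasonality. All names and numbers are illustrative.
def project_enrollment(enrolled, weeks_elapsed, target):
    """Return (patients/week, estimated weeks remaining to target)."""
    rate = enrolled / weeks_elapsed
    weeks_remaining = (target - enrolled) / rate if rate else float("inf")
    return rate, weeks_remaining

def kits_needed(patients, kits_per_patient=6, overage=0.10):
    """Drug supply estimate with a deliberately limited overage buffer."""
    return int(patients * kits_per_patient * (1 + overage))

rate, weeks = project_enrollment(enrolled=120, weeks_elapsed=24, target=300)
print(f"{rate:.1f} patients/week, about {weeks:.0f} weeks to target")
print(kits_needed(300))  # 1980 kits for 300 patients with 10% overage
```

In an adaptive design, a model of this shape would be rerun as dosing arms open and close, so that the supply overage can stay small without risking a stock-out at the sites.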


Value and Benefits

Core benefits of predictive analytics can be witnessed in all business units that play a part in the clinical trial life cycle and affect downstream patient outcome events. These benefits equate to optimization, real-time access to performance data, and consolidation of all clinical trial activities as a whole. Sales, marketing, clinical operations, and investigator and patient communities may all benefit from this type of approach. From a financial perspective, these benefits relate directly to time, money, and market share. Indirectly, these benefits correlate with cycle time or timeliness against plans, decreased costs or increased labor efficiencies in study startup or conduct, and quicker time to market, and thus a greater likelihood of increased revenue.4 Quality is an omnipresent value and benefit affecting a company's ability to gain market share and aid the common good. Figure 4 lists a sample set of trial activities, leading indicators, and benefits using the predictive analytics approach.

Are companies willing to invest to gain a competitive lead? With an enterprise technology and a predictive analytics discipline that could potentially change an organization, the total cost of ownership for such an investment could reach $1 million upfront, depending on the type of implementation approach and the extent of organizational commitment. Compared to the Tufts Center for the Study of Drug Development's 2006 estimate of $1.2 billion for the overall cost of a new drug product, an upfront investment of well under 1% of that figure could potentially mitigate or reduce the financial burden for a single drug and other products in a company pipeline.5 Figure 5 illustrates a 57% increase in the share of organizations acknowledging a competitive advantage by investing in and applying a predictive analytics approach to their activities. Overall, the practice's contributions toward one company's ability to distinguish itself from the rest, or gain the competitive edge, make predictive analytics such a great benefit to its users.

Figure 4  Advantages of Using Predictive Analytics During Startup and Study Conduct Phases. In the original exhibit, activities spanned planning, monitoring, and decision making, and the leading indicators and benefits spanned cost, time, and quality.

Activity | Leading Indicators and Benefits
Overall study timeline | Monitor daily and monthly progress from the outset, reducing enrollment delays, the biggest factor in trial delays and costs
Site activation and patient screening trends | On-time or ahead of planned dates; more patients screened in less time
Clinical forecast and drug supply planning | Drug-per-patient planning and forecast accuracy
Site performance, monitoring visits and reports | Better quality against previous performance and track record
Ongoing patient enrollment and recruitment | Intuitive access to real-time enrollment and ability to change projections and determine outcomes
Adverse event monitoring | Manage safety reporting and timelines against recorded trends

For example, investigator management has become a dog-eat-dog world of finding that miracle site location with access to uncounted quantities of patients. Queries on investigator demographics, patient access, and performance cross our desks every day. Contracting with the best and most agile sites, which in turn have these prized patient lists, is a major leap for companies seeking a competitive advantage. Any prized lists (and clinical data in general) continue to accumulate exponentially, byte upon byte, inside the hub of any company today. Contextually applying meaning to clinical data and retrospectively reviewing what transpired are "old" news. The capability to apply meaning to these data to drive the future outcomes and conduct of clinical trials is a whole new direction, or is it? Perhaps we need to "go back to the future."

Figure 5  Companies Gaining a Competitive Advantage When Using Predictive Analytics
2010: 37%   |   2011: 58%
Note: Adapted from MIT's Sloan Management Review, 2011.1

Back to the Future

Decision-support systems were catchy products in the technology industry during the 1980s and 1990s, in part due to the explosion of expert systems and advancements made in database technologies. However, organizationally and culturally, companies were not prepared to understand data intuitively or to use data to drive decision-making, despite the technological maturity. Therefore, the technology industry retreated from providing decision-support systems in favor of more componentized technologies (application servers/databases, reporting tools, middleware), which companies could purchase separately and bolt onto their existing information technology infrastructure over time. With the advent of predictive analytics, we are witnessing a return to composite platforms, with components being built together naturally or via legacy fusion. Perhaps we need to revisit the construct of a decision-support system methodology as the driver for predictive analytics. With today's technology, the ambition of using systems to tell decision makers what to do is no longer inconceivable; it becomes a practical means to an end, which is a competitive advantage. What separates today's environment from the one prevalent during previous attempts to leverage decision-support methodologies is that companies are committing to organizational changes to embrace decision-support and data-driven cultures. The push for data-driven productivity from the top down propagates a culture that decides for the future based on current results. Trade literature reports that industries such as sports, retail, and commerce have demonstrated significant improvements in competitiveness that they attribute to their embodiment of predictive analytics dispersed throughout the layers of their organizations. Further, the volatility in the clinical trials arena today, coupled with the challenges of staying ahead of the competition while maintaining or decreasing service costs, means that attempts

at leveraging technology without the organizational or embedded use of predictive analytics are futile.1 Simply put, the key to the attractiveness and success of today's decision-support technology, such as predictive analytics, is its symbiosis with organizational change, which reflects the exchange between technology and culture. Much has changed since 20 years ago, when organizational change and data-driven cultures were concepts not readily digested and embodied in corporate strategies. However, perhaps technology has not changed so much as has the strategy of driving organizational behavior toward making (better) decisions based on real-time data. This is the real reason for the shift back to decision-support thinking and toward the future of predictive analytics.

Reflections on Predictive Analytics

The focus of this article is the pharmaceutical industry's pursuit of effective clinical trials and the benefit of using predictive analytics to achieve that goal. We have assessed that a one-time financial investment of well under 1% of a single drug program's cost, in analytics technology and organizational change, to reach this propelled state of efficient clinical trial maturity is microscopic compared to the billions of dollars potentially wasted, and retrospectively unaccounted for, per drug program. Enterprise technologies have advanced in ways that enable organizations to consolidate legacy data silos or to place business analytics applications on top of these silos for bird's-eye snapshots. However, technology alone does not answer questions about what happened in the past. With as much uncertainty as there is about the future, technology plays a lesser role to some extent in predicting things to come. Human thought still prevails in crafting that vision, as well as in telling stories of the past. Predictive analytics, tightly coupled with the applied organizational embodiment of a data-driven culture and thought-based operations, sets the stage for a definitive formula to achieve the ultimate goal: a competitive edge that can be measured by optimized clinical trials, reduced costs, and increased revenues. The formula is a "continuous gearworks" of process, technology, data outcomes and response, and organizational cultures empowered to embrace and take on decision-making responsibilities.

References
1. Kiron D, Shockley R, Kruschwitz N, Finch G, Haydock M. Fall 2011. Analytics: the widening divide. MIT Sloan Management Review research report, 22 pp.
2. Kreutter D, as interviewed by Kiron D, Shockley R. 2011. How Pfizer uses tablet PCs and clickstream data to track its strategy. MIT Sloan Management Review, August 25, 2011, p. 2.
3. Kumar AR. 2011. Rising to the challenge. Pharma 7(4): 38.
4. Lewis A. 2009. Enrollment planning for critical path studies. Applied Clinical Trials Online, November 2009, p. 2.
5. Tufts Center for the Study of Drug Development. Research milestones. Available at http://csdd.tufts.edu/research/research_milestones.

April Davis, MS, has more than 20 years of high-tech experience, predominantly in information-creation and serving solutions. For the past 14 years, she has been engaged in the life sciences industry, including consulting and implementation in the areas of business analytics, clinical trial process reengineering, safety and pharmacovigilance application development, product strategy, and clinical data integration. She also serves as a member of the Steering Committee of, and special advisor to, the Metrics Champion Consortium. She can be reached at [email protected].



Carmen R. Gonzalez, JD

Issues in Clinical Research | Peer Reviewed

Study Withdrawals
Follow the Reason to Find the Solution

Most reasons for study withdrawal fall into common categories that allow sites to plan ahead to avoid the most ordinary study withdrawal pitfalls.

Although half the battle of a successful research campaign is enrolling adequate numbers of patients, it is equally important to retain those patients to preserve data integrity and ensure the capacity to file study results. Over the years, retention case studies have revealed that most reasons for study withdrawal fall into a few basic categories, with the remaining types being particular to a given study design. These common categories represent an advantage to clinical sites, since they allow sites to plan ahead to avoid the most ordinary study withdrawal pitfalls. This article reviews the core withdrawal reasons and suggests methods to overcome each of these hurdles (see Figure 1).

Personal Reasons

Across a variety of therapeutic areas and disease stages, the primary reason cited for discontinuing participation in studies typically concerned events in the subject's life that undermined compliance. These reasons included job changes (schedule shifts or unemployment), school schedule alterations, home life disruptions (divorce, death, and calamity), relocation, and study fatigue. For lengthy studies, the likelihood of some personal event influencing withdrawal risk is high. Anticipating these events requires steady conversation with subjects at every visit to inquire about their everyday lives. Likewise, having resources and guidance to overcome these challenges is the linchpin to success.1 For example, if job loss occurs, the transportation needs of the subject may become heightened. Public transportation vouchers, gas cards, or van rides are options that, if provided universally to all subjects, can address prospective fiscal burdens placed on an unemployed subject. When major life disruptions, such as death and divorce, arise, providing information on local support resources can provide a "lifeline" to a subject in need while helping him or her stay on track. Recognizing and addressing the subject's emotional status can help build trust while offering much-needed support.2 To prevent withdrawals owing to study fatigue, several approaches may be useful for affirming subjects' interest in the research and supporting their study compliance. Providing tools that support study responsibilities is helpful (e.g., travel containers for study medication, pillows and blankets for extended onsite visits, portfolios for study diaries and other materials). Another tactic for retaining study interest is providing subjects with periodic newsletters that highlight the contributions patients make toward scientific research, share disease management information, and offer updates on study progress.


Figure 1  Common Study Withdrawal Reasons: Personal Reasons; Health & Safety; Noncompliance; Randomization Errors; Lost to Follow-Up

Finally, there may be unrealistic expectations of adherence that do not align with the subject's life (e.g., multiple visits required in a short time frame that do not pair well with a patient's long travel commute).2

Health and Safety

Although the health and safety issues that arise vary by disease indication and protocol, generally these matters can be grouped as unfavorable side effects. In some cases, these side effects can be predicted, so it behooves a clinical site to prepare a patient in advance for this possibility and to have coping suggestions in place beforehand. For example, if in a diabetes study there is a known hypoglycemic issue that may arise, it is recommended that sites receive training on how to introduce the topic of hypoglycemia to patients and how to properly suggest and distribute glucose tablets. In some instances, the personal issues mentioned previously can affect health and safety concerns. For example, if the subject population in an HIV study is engaging in high-risk behavior for HIV infection, such activities may include sex work, an activity that also places the subject at risk of criminal arrest. Having candid discussions about consistent medication adherence before possible incarceration events can help to secure adherence.

Noncompliance

When subjects experience study fatigue—a phenomenon in which patients tire of their study duties—noncompliance often follows.


Study fatigue may be due to the cumbersome nature of a given protocol or the lack of health improvement observed by the subject. Where weariness is suspected, greater support may be provided to overcome challenges.3 For example, in a schizophrenia study where homeless subjects are enrolled, there will be greater basic survival needs, which render a study design more complicated. Obtaining food and shelter, particularly during winter months, becomes a quest in itself, so adhering to a medication schedule is all the more arduous. Accordingly, providing all subjects with access to meal cards, socks, and gloves at study visits is a useful tactic, as is arranging for storage of the subject's medication offsite at a homeless shelter. Furthermore, working collaboratively with social workers, where possible, can help secure study adherence.

Randomization Errors

Failing to accurately enroll subjects invariably creates a study withdrawal later on. When a site is handling multiple studies within the same disease indication, the risk of confusion is compounded without tools to help site coordinators differentiate between them. This is an obvious mistake that can be avoided by instituting processes and procedures that guard against its occurrence. Specifically, equipping site coordinators with inclusion/exclusion cards and prescreening sheets will help them perform their enrollment activities accurately and consistently for each study under way.

Lost to Follow-Up


This is a consistent category of study withdrawal, no matter the disease indication, and one where site coordinators can exercise greater initiative.4,5 When patient contact is first established, obtaining backup contacts in a subject's social network is a good idea. It is also wise to provide subjects with contact update postcards so they can alert you to any unexpected relocation(s). Should a missed appointment occur without notification, follow-up with the patient and/or his or her contact reference can be activated right away. In some cases, a little encouragement is all that is needed to maintain participation (e.g., "You've got one last visit to go!"). Be sure that your follow-up contact with individuals is in their native language.
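As a concrete illustration of the record-keeping this implies, here is a minimal, hypothetical sketch of how a coordinator's tracking tool might store backup contacts (each with a preferred language) and generate an ordered follow-up plan the moment a visit is missed. All names, fields, and data are invented for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a coordinator's follow-up tracker: backup contacts,
# each with a preferred language, and an ordered outreach plan generated the
# moment a visit is missed without notification. Names and data are invented.

@dataclass
class Contact:
    name: str
    phone: str
    language: str = "en"  # follow-up should use the contact's native language

@dataclass
class Subject:
    subject_id: str
    primary: Contact
    backups: list = field(default_factory=list)
    visits_remaining: int = 1

def follow_up_plan(subject: Subject) -> list:
    """Ordered outreach steps after a missed visit without notification."""
    steps = [f"call {c.name} at {c.phone} (language: {c.language})"
             for c in [subject.primary, *subject.backups]]
    if subject.visits_remaining == 1:
        steps.append('encourage: "You\'ve got one last visit to go!"')
    return steps

subject = Subject(
    subject_id="S-014",
    primary=Contact("Ana R.", "555-0100", language="es"),
    backups=[Contact("Luis R. (brother)", "555-0111", language="es")],
    visits_remaining=1,
)

for step in follow_up_plan(subject):
    print(step)
```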



Conclusion

Risk of withdrawal is a recurrent threat to study completion, but with ample planning and appropriate support measures, it can be reduced and effectively managed. Start with answering why the risk arose, and you will discover how to combat it. Be sure to consult with your local institutional review board and regulatory or compliance officials to ensure that the remedies you seek to apply at your site are appropriate.

References
1. Nicholson LM, et al. 2011. Recruitment and retention strategies in longitudinal clinical studies with low income populations. Contemporary Clinical Trials 32(3): 353–62.
2. Mohr D, et al. 1999. Treatment adherence and patient retention in the first year of a Phase-III clinical trial for the treatment of multiple sclerosis. Multiple Sclerosis 5(3): 192–7. Available at http://msj.sagepub.com/content/5/3/192.short.
3. Becker M, et al. 1980. Strategies for enhancing patient compliance. Journal of Community Health 6(2).
4. Rosen S, Fox MP, Gill CJ. 2007. Patient retention in antiretroviral therapy programs in sub-Saharan Africa: a systematic review. PLoS Medicine 4(10): e298. doi:10.1371/journal.pmed.0040298.
5. ICH E6 Good Clinical Practice: Consolidated Guidance, Section 4.3.4.

Carmen R. Gonzalez, JD, has served as the manager of strategy and communications for Healthcare Communications Group (HCG), a global recruitment and retention firm headquartered in Los Angeles, Calif., for the past three years. She strategically guides her firm’s clients on the use of social media and new technologies within a larger framework of recruitment and retention initiatives. Prior to joining HCG, she worked for five years across the healthcare and software industries, applying her marketing, writing, and web programming talents to national business development campaigns. Her initial six-year career as a litigating attorney informs her business acumen. She can be reached at [email protected].

Wendy Boone, RN, MPH, CCRC, CCRA | Jennifer Zimmerer, MS, RD, CCRP | Kimberly Kreller, RN, BSN

Issues in Clinical Research | Peer Reviewed

CRC Primer
Tips for Achieving Operational Excellence

Current project managers share observations about factors that would have made them more effective and efficient when they were CRCs.

Over the past several years, clinical research has undergone many changes. For example, regulations have been updated; new guidance documents have been provided; and a U.S. government website highlighting clinical trials (ClinicalTrials.gov) has been established. As research practices and procedures change, so too do the associated costs. Recent economic trends have limited the funding available to conduct clinical trials. For this reason, it is extremely important for clinical research sites to run efficiently and effectively; in this regard, clinical research coordinators (CRCs) are essential for managing and ensuring the success of the research site. Although CRC training is on the rise, many CRCs remain self-taught and "learn as they go" when new studies are initiated. This article incorporates information learned by former CRCs now working as project managers for contract research organizations (CROs) and pharmaceutical companies. They share important observations about factors that would have made them and their sites more effective and efficient when they were CRCs.

Know Your Site

Not all research sites are created equal. The capabilities of any research site are highly dependent on the type of site: research only, academic, private practice, or a combination of these. Similarly, principal investigators (PIs) and the dynamics of sites tend to vary. Although this sounds obvious in principle, it is not unusual for some clinical research professionals to lump all sites together, thinking they function the same because this is the easiest way to manage multiple sites. The PI and the site CRC are the professionals with the most knowledge about their research site. They can determine what will work, what will not work, and the best way to accomplish the mission within the expected timeframe per the business model. As the PI is approached about new clinical trials, site feasibility questionnaires and pre-study visits are completed. The feasibility questionnaire should be completed with the most current information.1 Once selected, the research site is provided with a clinical protocol, which may or may not reflect the information provided at the time of the pre-study visit. Well before this time point, the CRC must review and assess the clinical protocol from an operational perspective, in order to ensure that the study can be undertaken effectively at the site.


Although the PI will agree to conduct the study with the sponsor, the CRC can play an active role in the assessment of the protocol and how it will work operationally within the practice. For key questions to ask, see Figure 1. Instituting such measures as a "mock" study visit can help determine if the study will work at the site. Although most PIs would rather continue with a study despite negative responses to the questions listed, they need to determine if it is best to participate. Concerns that sponsors or CROs will develop negative views of the site if the PI pulls out after study acceptance (or declines to agree in the first place) typically are unfounded, because the money and time saved by not continuing on with, or even initiating, a low-producing site is beneficial to the sponsor/CRO. Thus, the site should not be excluded from opportunities to participate in future studies. Cost savings in such instances are realized not only by sponsors/CROs, but also by sites. The time and effort spent preparing for startup and subsequent recruitment for poorly chosen studies, or on attempts to keep subjects engaged for the duration of the studies, can be better spent on studies that are more suitable to the site. Tracking past performance on previous studies can help CRCs identify potentially successful future studies, as well as opportunities for improvement. Furthermore, when studies are under way, careful monitoring of invoices and payments by CRCs can mean the difference between a successful clinical site and one that will struggle to maintain profitability and control. In short, know what your site is capable of; commit to only what you can do; and complete what you agree to.

Figure 1  Key Questions to Ask Related to Site Knowledge
●● What are the inclusion/exclusion criteria?
●● Do we have experience with this patient population?
●● Do the patients we can recruit meet these criteria?
●● Will our patients want to participate?
●● How many patients will be eligible after screening?
●● Of these eligible patients, how many will consent to be in the study?
●● What tests and procedures are needed?
●● Do we have experience with these tests?
●● Are we comfortable with the capability and availability of the resources that will be utilized for the completion of these tests and procedures?
●● Are additional tests, not easily accessible to our site, required for study completion?
●● Do we have the appropriate connections with specialists to complete required testing?
●● Are current staff members comfortable with the recruitment timeline for this study?
●● Will enrollment be competitive (considering that, based on the total population at the site, approximately 10% will be able to be recruited due to inclusion/exclusion criteria and general enrollment issues)?
●● Have similar studies been done in the past? Are there competing studies by the same or other practitioners in the practice, at the same academic institution, or in a practice in the same building?
●● Is completion of the study visits as planned achievable for our subjects and our staff?
●● What is the time commitment for the subjects and the staff?
●● Are other offices/services involved with the study?
●● Can the schedule of activities be coordinated all in one day or within a protocol-driven time period?
●● Is this a study we can commit to and complete within the expected timeframe?
●● Can we afford to do this study?
●● Why are we doing this study (i.e., to help the patients, for money, as part of a growth plan, etc.)? Does it fit our overall goal and mission?

Don't be Afraid to Learn and Challenge

Clinical research is a highly regulated field, with updated regulations and new guidance documents appearing frequently. Site standard operating procedures need to be in place, reviewed, and updated regularly. The PI needs to ensure site staff are adequately trained and demonstrate competency2 in the protocol, protocol-related tasks, the investigational product, regulatory requirements, acceptable standards, and the protection of human subjects.

CRCs need to explore the regulations and understand “gold standards” (i.e., best practices), as well as the International Conference on Harmonization’s Good Clinical Practice (ICH GCP) Guidelines.3 Although the basic processes of clinical research might at first lend themselves to being easily learned, the intricate nature of the regulations and processes of clinical research can prove daunting and lead to a less than optimal research practice. Therefore, CRCs should not be afraid to

request further protocol training at any time during a clinical trial. Due to the interactive nature of their relationships with clinical research associates (CRAs) and project managers that work with the site, CRCs are in a unique position to question, affect, and change practice in beneficial ways. CRAs can serve as an invaluable resource to sites by providing examples of practices that have been implemented at other sites successfully for subject recruitment, implementation of the study protocol, and maintenance of the site master binder, as well as numerous other items. As CRCs identify areas of improvement, training can be offered to other staff members and core competencies can be further developed to maintain adequate training levels.4 Questions to ask when assessing site practice are included in Figure 2.

Although it is important to learn from other clinical research professionals, it is also important to challenge existing paradigms within the practice and question requests from CRAs or sponsors/CROs when these seem out of line or incongruent with the regulations. It is not unusual for a sponsor/CRO to want to standardize all practices for a clinical trial across all sites. Although this can be vital to successful data collection for the study, it is not always the most conducive practice for the site. The point here is that CRCs need to be able to distinguish reasonable expectations from ones that should be challenged. Successful completion of a clinical trial relies on efficiency, compliance, and patient safety, so sponsors/CROs appreciate constructive input that improves these factors if it is presented appropriately and thoughtfully.

Figure 2  Questions to Ask When Assessing Site Practice
●● Is any specific decision being made based on the regulations, or because we have always done it this way?
●● Do I have the information I need to help our site make this decision?
●● Do I know where to find information related to the federal regulations, guidance documents, ICH GCP Guidelines, and gold standards?
●● Do I have the resources that I need to effectively execute this study?

Create it, Organize it, Own it

It is common for healthcare workers who are not involved in clinical research to minimize or trivialize its practice. Clinical research at the site may seem to many to be easy compared to clinical practice, yet it is in fact very rigorous because of its regulations, standards, and scrutiny by governmental authorities. It is up to the site CRC to work to create an atmosphere of acceptance and collaboration within the practice. The CRC is in an optimal position to assist the integration of subjects' research visits into the clinical practice. Being available to identify and explain research studies to office staff and potential participants is fundamental to a successful research practice. The CRC is able to build a unique bond with the site's research subjects, providing continued support and open lines of communication that promote trust among healthcare workers. In order to work efficiently and effectively, the CRC must be prepared for new research studies, provide for early study startup, develop organization systems to maintain compliant research study files, and ensure appropriate invoicing and reimbursement for work completed. Setting up clearly defined systems for this can be a difficult task that at times can seem complex and too difficult to manage with the daily workload.

However, the time spent to identify and set up the organization systems is well worth the effort. It helps sites to avoid grief as they introduce more studies or prepare for audits. Potential systems to be set in place at the site level are listed in Figure 3.

Once these systems are put into place, the CRC needs to use them effectively, because he or she, with the oversight of the PI, is in a unique position to affect research practice through consistency and structure. Ownership of maintaining a high-functioning research site can be stressful; therefore, the CRC should work to understand his or her limitations and what can be done to improve performance. Questions CRCs can ask to determine their ability to manage sites with agility are noted in Figure 4.

Conclusion

By being constantly alert and re-evaluating the status of their clinical research projects, CRCs can remain ahead of potential problems.

Figure 3  Potential Site Systems
●● A site biography, with copies readily available to provide to sponsors/CROs
●● Study start-up packets to be used when the site is approached with a new study
●● A filing system that ensures trial master file documents are reviewed, signed, and filed in a timely manner
●● A tracking system for invoices and payments
●● A documentation system for training (new and recurring)
●● A version control system to ensure only the current versions are used


Figure 4  Questions for CRCs to Assess Their Own Abilities
●● Am I willing to work efficiently and effectively?
●● Do I understand how our practice works and what is expected of me?
●● Do I understand the communication structure of our practice?
●● Am I communicating effectively with the PI and other study staff?
●● Do I know what it takes to run this research practice?
●● What can I do better?

A successful CRC is one who can provide efficient and effective management of the clinical site while understanding both the needs of the site and those of the overall research project. In an ever-changing global economic climate, sites need to be proactive with clinical research opportunities. The CRC plays an important role in understanding site potential, structuring the site to maximize opportunities, and maintaining site compliance.

References
1. Dainty K, Karlsson J. 2003. Factors to consider in determining the feasibility of a randomized clinical trial. Arthroscopy 19(8): 882–4.
2. U.S. Food and Drug Administration (FDA). Guidance for Industry: Investigator Responsibilities—Protecting the Rights, Safety, and Welfare of Study Subjects. October 2009. Available at www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM187772.pdf. Accessed December 15, 2011.
3. International Conference on Harmonization (ICH). ICH Good Clinical Practice E6 Revision 1. February 2006. Available at www.ichgcp.net. Accessed December 15, 2011.
4. Wool L. 2008. Good training practice 101: a primer for employee training plans. The Monitor 22(3): 55–61.


Wendy Boone, RN, MPH, CCRC, CCRA, is a project manager at ClinicalRM and has 11 years of progressive

research experience. She has worked as a clinical research coordinator, manager of coordinators, and clinical research monitor at a pharmaceutical company and CRO, and has provided training for clinical research professionals at all levels. An ACRP member, she is an instructor for the Fundamentals of Clinical Research course for ACRP. For this article, she provided substantial contributions to concept development, drafted and revised the text, and gave final approval of the version to be published. She can be reached at [email protected].

Jennifer Zimmerer, MS, RD, CCRP, is a program manager for a government-sponsored Phase I clinical trial program at ClinicalRM and has 13 years of clinical trial operations experience in the academic, commercial, and government arenas. Having started her career as a research coordinator and advanced to a management role in clinical research, she is currently involved in clinical research training for professionals at all levels. For this article, she provided substantial contributions to concept development, revised the text, and gave final approval of the version to be published. She can be reached at [email protected].

Kimberly Kreller, RN, BSN, a project manager at ClinicalRM, has 12 years of research experience. She has worked as a research nurse coordinator, was appointed as the first regional research manager for a large healthcare system, and was promoted to director of regional research. Her greatest accomplishment has been the ability to increase awareness of and accessibility to clinical trials. For this article, she contributed to concept development, contributed to the drafting and revision of the text, and gave final approval of the version to be published. She can be reached at [email protected].


Suzanne Heske, RPh, MS, CCRA, BCNP

Columns | CRA Central

Modernizing Monitoring
The Case for Risk-Based Monitoring

Although seemingly full of promise, risk-based monitoring must be used in collaboration with other means in an effort to reach the goal of modernizing operational efficiencies.

Innovative monitoring practices can open the door to the future, and risk-based monitoring is one plausible approach to take. However, risk-based monitoring is no substitute for face-to-face site contact, direct visual review of source documentation/data, and visual inspection of investigational products or medical devices. Thus, it must be used in collaboration with other means in an effort to reach the goal of modernizing operational efficiencies. In the summer of 2011, the U.S. Food and Drug Administration (FDA), through its Human Subject Protection/Bioresearch Monitoring Initiative, published a draft guidance document outlining strategies and plans for modernizing investigational study monitoring.1 Before reviewing the FDA's thought process and recommendations surrounding implementation of such strategies, I wish to share a simple thought that I think speaks volumes about monitoring in general. I noted the following quote in an article I read earlier this year in Pharmaceutical Technology, and I find it to be applicable for this discussion: "Technology may expedite operations, but the absence of the human element could cost dearly."2 This sentiment is just the opposite of what sponsors may achieve when moving into more innovative models to ensure subject safety, as well as data quality and integrity.

What's Going on Here?

At this point, it would be prudent to define risk-based monitoring. This activity is not just onsite monitoring; it can be described as a risk assessment program that looks at overall study conduct and data collection processes to assess where or what might go wrong from a prospective angle, rather than retrospectively. The key is to apply risk assessment and mitigation processes at the beginning of the protocol development phase, as monitoring is just one piece of ensuring the quality and integrity of clinical trials. Priorities in trial conduct, in data collection, and in how and when monitoring will be applied can be established early to identify the critical data or process(es) that may go awry. In the past, sponsors have interpreted the FDA and International Conference on Harmonization (ICH) guidances (i.e., ICH E6) to imply that clinical trial monitoring equates to frequent onsite monitoring and 100% data verification for all clinical trials.1 However, after reviewing the recent draft guidance, there is clearly a shift in regulators' thinking about what constitutes oversight with respect to subject safety and the quality and integrity of data. Within the pharmaceutical industry, it is felt that the intent of the draft guidance document is to let sponsors know the FDA realizes a variety of approaches can be used to fulfill clinical trial monitoring responsibilities. The guidance describes strategies for applying "modern" monitoring activities, which focus on critical study parameters, thus making it feasible to rely on a combination of monitoring methods, incorporating "centralized monitoring practices," to effectively oversee a study.1 At this juncture, the FDA is indicating a move is under way to facilitate operational changes that will promote a new monitoring paradigm (i.e., ensure that FDA Compliance Program Guidance Manuals 7348.810 and 7348.811 are compatible with approaches described in the recent draft guidance).1 In addition, the draft guidance articulates the FDA's recognition that alternative approaches are valuable and necessary in order to facilitate change and enhance the effectiveness of industry's monitoring practices. Furthermore, the FDA recognizes that risk-based monitoring, including the appropriate use of centralized monitoring and technological advances, can meet regulatory requirements if applied under suitable circumstances.

Knowing When and How to Apply New Practices

The draft guidance provides a detailed overview of onsite and centralized monitoring processes and how these processes factor into designing appropriate monitoring practices. The emphasis is on centralized monitoring, and on how to apply remote monitoring to a myriad of monitoring activities. As the technology footprint evolves, so will centralized monitoring, as it is currently dependent upon the extent of accessibility of electronic medical records and electronic data capture systems. However, the idea of using triggering techniques to identify risk appears to be a more practical and realistic approach to modernizing monitoring processes. Cooley and Srinivasan promote the fundamental premise that monitoring sites where there is little or nothing to monitor is not a useful proposition, because statistics show us that one-third or more of sites do not enroll a single subject, yet we still go out and monitor every site.3 The FDA further suggests that methods for modernizing clinical trial monitoring begin with identifying:


●● inherent risks (i.e., the subject population);
●● data that are critical to the reliability of the study findings (i.e., evidence that the subject has the disease under study); and
●● critical processes that may be related to data reliability, protocol adherence, or subject safety (i.e., the process of obtaining pharmacokinetic samples).

Furthermore, these and other factors can be incorporated into a comprehensive and detailed monitoring plan that spells out how to address them. For example, the plan may include expected steps to be taken in the event of noncompliance and specific training required for all internal and external personnel associated with the trial. Moreover, it would identify areas at risk and the steps needed to correct episodes of noncompliance. Applying the International Organization for Standardization concept of risk management to clinical trial monitoring is one methodology that allows for continual integration of the concepts of planning, doing, checking, and acting throughout the conduct of the study. A simple sketch of how such risk triggers might be scored follows.
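To illustrate what a triggered approach might look like in practice, here is a minimal, hypothetical sketch that scores each site on a few centralized indicators and queues onsite visits only where the score crosses a threshold. The indicators, weights, and threshold are illustrative assumptions; they are not prescriptions from the FDA draft guidance or from the cited authors.

```python
from dataclasses import dataclass

# Hypothetical triggered-monitoring sketch: score sites on a few centralized
# indicators; only sites crossing the threshold are queued for onsite visits.
# Indicators, weights, and threshold are illustrative assumptions.

@dataclass
class SiteMetrics:
    site_id: str
    enrolled: int                # subjects enrolled to date
    planned: int                 # subjects planned to date
    open_queries_per_crf: float  # data-quality signal from centralized review
    protocol_deviations: int
    days_since_last_visit: int

def risk_score(m: SiteMetrics) -> float:
    score = 0.0
    if m.planned and m.enrolled / m.planned < 0.5:
        score += 2.0                           # badly behind enrollment plan
    score += min(m.open_queries_per_crf, 3.0)  # capped data-quality signal
    score += 0.5 * m.protocol_deviations
    if m.enrolled == 0:
        score -= 2.0                           # little to monitor onsite yet
    elif m.days_since_last_visit > 180:
        score += 1.0                           # enrolling site left unvisited
    return score

VISIT_THRESHOLD = 2.5

sites = [
    SiteMetrics("101", enrolled=12, planned=15, open_queries_per_crf=0.4,
                protocol_deviations=1, days_since_last_visit=60),
    SiteMetrics("102", enrolled=0, planned=10, open_queries_per_crf=0.0,
                protocol_deviations=0, days_since_last_visit=200),
    SiteMetrics("103", enrolled=4, planned=12, open_queries_per_crf=2.1,
                protocol_deviations=3, days_since_last_visit=190),
]

for m in sites:
    s = risk_score(m)
    action = "schedule onsite visit" if s >= VISIT_THRESHOLD else "monitor centrally"
    print(f"site {m.site_id}: score {s:.1f} -> {action}")
```

Note that a site with zero enrollment scores low, consistent with the observation that visiting sites with nothing to monitor adds little value.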

Considering the Benefits and Challenges

Over time, risk-based monitoring should free up additional time for clinical research associates (CRAs) to conduct or perform other monitoring activities not associated with subject safety or data quality/integrity. The Tufts Center for the Study of Drug Development recently published global CRA workload and utilization benchmarks indicating that, prior to study completion, industry thought that 60% of a CRA's time was spent onsite. Actually, the study showed that only 41% of a CRA's time is spent onsite. This time allocation varies widely by geographic region, and the study indicated U.S.-based CRAs spend more time onsite than those in the rest of the world.4 Yes, it is indeed necessary to have a well-designed protocol that has taken

quality into consideration from the beginning. If centralized monitoring practices are going to be effectively employed, the effort will require a shift in resources, either to study management, to data management, or to the CRA, and additional time will need to be allocated for such activities to occur. What we are really talking about here is transference of workload responsibilities, as well as the need for CRAs to acquire new skill sets in order to adapt to risk-based monitoring. From a risk-based perspective, the best way to attack risk is to identify the risk, and then try to mitigate it.3 Moving forward, everyone needs to understand that the approach to clinical trial monitoring is not etched in stone. There is room to consider and apply modern operational efficiencies that will allow sponsors to move forward in bold ways to benefit trial subjects. Someone needs to take the risk-based monitoring leap of faith by creating, implementing, checking, and tweaking a risk-based clinical management plan, be it via a triggered model or some other type in order to modernize clinical trial monitoring. That’s right. No more business as usual. Let us modernize monitoring and overall data quality along the way!

References
1. U.S. Food and Drug Administration. Draft Guidance for Industry: Oversight of Clinical Investigations—A Risk-Based Approach to Monitoring. August 2011.
2. Agent-in-place: all systems slow. 2012. Pharmaceutical Technology, January 2012, p. 16.
3. Cooley S, Srinivasan B. 2010. Triggered monitoring: moving beyond a standard practice to a risk-based approach that can save sponsors time and money. Applied Clinical Trials, August 1, 2010.
4. Tufts Center for the Study of Drug Development. 2012. Impact Report press release, January 17, 2012.

Suzanne Heske, RPh, MS, CCRA, BCNP, currently serves as clinical quality assurance manager for the Kforce-Pfizer FSP alliance. She has more than 15 years of experience in clinical research, including having served as an investigator. She has 24 years of experience as a clinical pharmacist with prescriptive authority in several therapeutic areas. As a pharmacist, she holds board certification as a radiopharmacy professional in all of the imaging modalities. She can be reached at [email protected].

Kirk Mousley, MSEE, PhD

Columns | Data-Tech Connect

Making Virtual Teams Work

The members of virtual teams need to know three important things: Who is in charge? Whose SOPs should be followed? Who are the experts to be consulted?

In my last column, I discussed supporting remote workers, and suggested that they are becoming more prevalent and that having them often makes business sense. I also discussed how technology was evolving to help remote workers work efficiently, and concluded by stating that some issues, such as communication difficulties and management attitudes, would continue to be areas that needed improvement. In this column, I want to go beyond companies supporting remote workers and discuss some of my experiences with virtual teams. I have found that many of the issues that challenge remote workers also crop up with virtual teams, and I would like to highlight some efforts to overcome these issues.

Working with the All-Star Team

I am currently working as a member of three different virtual teams. Each team is comprised of workers representing several different pharmaceutical services companies that have come together to meet the needs of the end client (a different pharmaceutical sponsor company in each case), which is outsourcing clinical trial work. Pharmaceutical companies outsource work for several reasons, including insufficient staff, infrastructure, time, or desire to do the work being outsourced. One thing that pharmaceutical companies cannot outsource is their regulatory compliance responsibilities; they can delegate work to other companies, but they remain on the hook regarding compliance. There are two common business situations in my experience with outsourcing:

●● small startup pharmaceutical companies that do not have the staff, infrastructure, or standard operating procedures (SOPs) to perform clinical trials and
●● large pharmaceutical companies that have peak staffing shortages, and thus require additional resources to complete trials.

Small startup companies will often outsource practically everything related to performing a clinical trial to a contract research organization (CRO), with the possible exceptions of medical and scientific oversight, for which these companies have staff. SOPs are a small subset of regulatory compliance, and I will use them as an example in this column. Small startups may not have all the SOPs needed to run clinical trials, and thus cannot request that vendors follow their SOPs. In other words, they can't push their SOPs upon their subcontractors/vendors. The large pharmaceutical companies will have SOPs to cover most if not all of the processes involved in conducting clinical trials. These SOPs may or may not be applicable (or available) to a subcontractor/vendor (e.g., the sponsor has not performed an electronic data capture [EDC] study and has no EDC-related SOPs). Also, the sponsor may not want to push its own SOPs upon subcontractors/vendors. In both of these situations, however, the sponsor should obtain copies of all relevant vendor SOPs, and audit the vendors during the trial for SOP compliance and other regulatory issues.

Complicating and Facilitating Factors

On one team, the clinical data management department has several contractors who are either independent (like me) or working for firms that provide contract services. A further complication is that several of these contract firms are working for yet another services firm, which in turn is working for the CRO. At the other end of the spectrum, the CRO in this case is employed by a federal government services firm, which ultimately is working for both


the government and the sponsor. In short, there are five layers of business at play here. In every instance, all of us on these teams have the ability to work from our own offices. The technological infrastructure is in place to support our teams, and communications are performed by teleconferences, e-mails, project portals, and dedicated project managers. With all of the virtual teams in which I have been involved, there are three major issues that seem to cause most, if not all, of the difficulties:

●● Determining who is in charge of or responsible for the overall project and its various subparts.
●● Whose SOPs should the members of the virtual team be following: their own, those of their customers, or those of the end clients?
●● Who are the experts, either medical or scientific, who should be consulted in cases where decisions that affect the execution of the trial or the collection of the clinical data are beyond the subcontractor's/vendor's expertise?

At first glance, these questions should have obvious answers; in practice, however, they rarely do. There are many reasons for this conundrum, but the major reason appears to be a conflict between who has expertise and who has the ultimate responsibility. It also seems to be a matter of who has the time and wants to assert control. However, in truth, the sponsor should delegate the authority to the different project participants as needed and as they are capable of assuming it. Those with the delegated authority should in turn select (and document) the SOPs to be followed. Finally, those who have the expertise should be identified so they can supply expertise to the project when it is needed. As an example, if a sponsor hires a CRO to perform a clinical trial and delegates the authority to the CRO for performing the trial, the CRO will normally perform the trial using its own SOPs and its own workers and contractors.


The CRO should have the expertise to run the clinical trial, but will need to consult with medical personnel at the sponsor for trial-related decisions. The sponsor retains the ultimate responsibility for seeing that the trial is completed in compliance with the applicable regulations, but may choose to be no more involved than overseeing such regulatory compliance and providing medical expertise as needed. More commonly, larger pharmaceutical companies will have substantially more involvement, including varying levels of pharmacovigilance oversight, statistical analysis, writing and producing the clinical study report, and preparing regulatory submissions.

Who's in Charge Here?

To return to the three issues posed above, I believe that, as a best practice, the sponsor should designate both who is in charge of the overall project and who is in charge of the individual parts that make up the project. Those in charge of the various parts should select and document the appropriate SOPs to be followed. The sponsor should then ensure that the final deliverables of data, documents, reports, and anything else contractually agreed upon meet delivery expectations, and that the receipt of these deliverables is covered by its own SOPs. Furthermore, the sponsor should identify medical experts as "go-to" persons for trial-related questions, and these experts should be available and provide timely responses. Down the line of suppliers, each person/vendor with granted authority should then assume the authority and select the SOPs, or further delegate the authority as appropriate, and identify additional experts they may have on staff. Thus, at each stage, there should be a clearly designated person in the position of authority, a clearly defined set of SOPs to be followed, and a clearly explained expert communications process. Once this has been accomplished, the applicable SOPs should then be made available for all who are expected

to follow them; this might be accomplished using a project portal. SOPs regarding training should be followed at the appropriate level. For example, the SOP covering the development of the EDC application being used should be with the vendor who has the EDC software application if it is responsible for doing the study-specific development/configuration in that application (i.e., if it is hosting and providing the data entry system). In the absence of these steps, project team members are left in a confused state as to what they should be doing and may resort to following their own SOPs for their individual work efforts, even though these might not mesh well with those from higher in the hierarchy. A clinical trial portal that supports work flow can serve as a repository for all the SOPs, and can enforce the training to help ensure that all participants are trained on the applicable SOPs for the work they will be performing. A portal also can provide such benefits as serving as a central repository for trial issues and decisions, so that all participants can review them as needed.

It Takes More than Tech

In conclusion, technology cannot solve all of the issues that may arise among teams. Remote desktop applications, Internet portals, e-mails, and teleconferences can help team members work efficiently and communicate with other team members, but it is team leadership with explicit authority roles and positions, clearly documented applicable SOPs, and clearly identified team member expertise that can best help ensure that intelligent efforts applied by all team members will guide the project to a successful completion.

Kirk Mousley, MSEE, PhD, president of Mousley Consulting, Inc. and past cochair of the ACRP Data Management and Technology Forum, has directed numerous efforts in computer application design and development, clinical database design, data editing/cleaning, and submissions. He has 20 years of computer systems experience in the consulting, education, telecommunications, and aerospace fields, and can be reached at [email protected].

Gary W. Cramer

Columns | Off the Wire

Form and Function in the News
Editors Question if Modern Science Has Become Dysfunctional

Presenting recent examinations of debates and doubts over the inner workings of the clinical research enterprise that first appeared in the Wire.

For more information about the Wire, to search back issues, or to become a subscriber, please visit www.acrpnet.org/MainMenuCategory/Resources/TheWire.aspx.

The recent explosion in the number of retractions in scientific journals is just the tip of the iceberg and a symptom of a greater dysfunction that has been evolving in the world of biomedical research, said the editors-in-chief of two prominent journals in a presentation before a committee of the National Academy of Sciences (NAS) in late March. "Incentives have evolved over the decades to encourage some behaviors that are detrimental to good science," says Ferric Fang, editor-in-chief of the journal Infection and Immunity, who spoke at the meeting of the Committee of Science, Technology, and Law of the NAS along with Arturo Casadevall, editor-in-chief of mBio®, an online, open-access journal. Both publications are produced by the American Society for Microbiology. In the past decade, the number of retraction notices for scientific journals has increased more than 10-fold, while the number of journal articles published has increased by only 44%. Although retractions still represent a very small percentage of the total, the increase is disturbing because it undermines society's confidence in scientific results and in public policy decisions that are based on those results, says Casadevall. Some of the retractions are due to simple error, but many are a result

of misconduct, including falsification of data and plagiarism. More concerning, say the editors, is the fact that this trend may be a symptom of a growing dysfunction in the biomedical sciences—one that needs to be addressed soon. At the heart of the problem is an economic incentive system fueling a hypercompetitive environment that is fostering poor scientific practices, including frank misconduct. Too many researchers are competing for too little funding, creating a survival-of-the-fittest, winner-take-all environment where researchers increasingly feel pressure to publish, especially in high-prestige journals. "In the end, it is not the number of high-impact-factor papers, prizes, or grant dollars that matters most, but the joys of discovery and the innumerable contributions both large and small that one makes through contact with other scientists," the editors write. "Only science can provide solutions to many of the most urgent needs of contemporary society. A conversation on how to reform science should begin now." [Source: www.eurekalert.org/pub_releases/2012-03/asfm-hms032712.php (EurekAlert!, 3/27/12)]

Large-Scale Analysis Finds Majority of Trials Provide No Meaningful Evidence

The largest comprehensive analysis of ClinicalTrials.gov finds that clinical


trials are falling short of producing the high-quality evidence needed to guide medical decision-making. The analysis, published on May 1 in the Journal of the American Medical Association, found that most clinical trials are small, and that there are significant differences among methodological approaches, including randomization and blinding, and in the use of data monitoring committees. "Our analysis raises questions about the best methods for generating evidence, as well as the capacity of the clinical trials enterprise to supply sufficient amounts of high-quality evidence to ensure confidence in guideline recommendations," said Robert Califf, MD, first author of the paper, vice chancellor for clinical research at Duke University Medical Center, and director of the Duke Translational Medicine Institute. The analysis was conducted by the Clinical Trials Transformation Initiative, a public-private partnership founded by the Food and Drug Administration and Duke. It extends the usability of the data in ClinicalTrials.gov for research by placing the data through September 27, 2010 into a database structured to facilitate aggregate analysis. This publicly accessible database facilitates the assessment of the clinical trials enterprise in a more comprehensive manner than ever before, and enables the identification of trends by study type. "Analysis of the entire portfolio will enable the many entities in the clinical trials enterprise to examine their practices [compared to] others," says Califf. "For example, 96% of clinical trials have [1,000 or fewer] participants, and 62% have [100 or fewer]. While there are many excellent small clinical trials, these studies will not be able to inform patients, doctors, and consumers about the choices they must make to prevent and treat disease."

[Source: www.eurekalert.org/pub_releases/2012-05/dumc-laf042612.php (EurekAlert!, 5/1/12)]

FDA Ahead of Canada, Europe in Drug Approval Race



The U.S. Food and Drug Administration (FDA) generally approves drug therapies faster and earlier than its counterparts in Canada and Europe, according to a new study by Yale School of Medicine researchers. The study counters perceptions that the drug approval process in the United States is especially slow. Led by second-year medical student Nicholas Downing and senior author Joseph S. Ross, MD, assistant professor of internal medicine at Yale School of Medicine, the study was published May 16 online by the New England Journal of Medicine. "The perception that the FDA is too slow implies that sick patients are waiting unnecessarily for regulators to complete their review of new drug applications," said Downing, who decided to conduct the study because there have been no recent comparisons of the FDA's regulatory review speed with those of regulating agencies in other countries. Downing, Ross, and colleagues reviewed drug approval decisions of the FDA, the Canadian drug regulator, Health Canada, and the European Medicines Agency (EMA) between 2001 and 2010. They studied each regulator's database of drug approvals to identify novel therapeutics, as well as the timing of key regulatory events, allowing regulatory review speed to be calculated. Canada and Europe were chosen as comparisons because they face similar pressures to approve new drugs quickly while ensuring they do not put patients at risk. The team found that the median total time to review was 322 days at the FDA, 366 days at the EMA, and 393 days at Health Canada.

[Source: www.eurekalert.org/pub_releases/2012-05/yu-idr051512.php (EurekAlert!, 5/16/12)]

International Experts Say Full Reports from Trials Should be Public

The full clinical study reports of drugs that have been authorized for use in patients should be made publicly available in order to allow independent re-analysis of the benefits and risks of such drugs, according to leading international experts. Writing in PLoS Medicine, Peter Doshi from Johns Hopkins University School of Medicine in the U.S., Tom Jefferson from the Cochrane Collaboration in Italy, and Chris Del Mar from Bond University in Australia say that there are strong ethical arguments for ensuring that all clinical study reports are publicly accessible. In the course of trying to get hold of the regulatory evidence for the approval of the drug Tamiflu, the authors received several explanations from Roche as to why it would not share its data. By publishing that correspondence and commentary, the authors assert that the results from experiments on humans should be made available, all the more so given the international public health nature of the drug. They argue: “It is the public who take and pay for approved drugs, and therefore the public should have access to complete information about those drugs. We should also not lose sight of the fact that clinical trials are experiments conducted on humans that carry an assumption of contributing to medical knowledge. Nondisclosure of complete trial results undermines the philanthropy of human participants and sets back the pursuit of knowledge.” The authors challenge industry to either provide open access to clinical study reports or publicly defend their current position of randomized controlled trial data secrecy.

[Source: www.eurekalert.org/pub_releases/2012-04/plos-tfr041012.php (EurekAlert!, 4/10/12)]

Gary W. Cramer is the associate editor for ACRP publications and other communications projects. He can be reached at [email protected].

Ronald S. Waife

Columns

Operating Assumptions

First, Kill All the Lawyers

The first thing we do, let’s kill all the lawyers.
—Wm. Shakespeare, Henry VI, Part 2, c. 1591

. . . and the procurement officers and outsourcing managers, while we’re at it.
—Clinical Research Executive (Anon.), c. 2012

Professionalizing the contracting process in clinical development has gone astray. What was once meant to add a little bit of needed legal-oriented skill to project management has become a self-perpetuating web of complexity. Here’s a true story: We were interviewing various managers of a sponsor’s clinical research department, exploring their skills at outsourcing. The departmental lawyer, the procurement officer, and the strategic outsourcing officer each said that he or she was the only one protecting the company from financial ruin, compliance risk, and regulatory sanction. Not a single clinical person had the arrogance to assert such importance to clinical research success. How did we get here? When and why did clinical trial expertise get so buried under layers of bureaucracy of our own making? Every sponsor I visit has such a story. These days, the stories are told as muted rumblings: monitors who report investigators furious at the site contracts sent by the sponsor’s legal department, trial schedules with built-in three- to six-month delays to allow for contract negotiations with contract research organizations (CROs), trial managers throwing up their hands at having to use the same CRO that failed them last time because “procurement likes them better.” Sometimes the storytellers don’t even sound like they are complaining. Is this a sign of healthy acceptance, a sign of despair, or is it that no one remembers when life was easier?

When Bad Things Happen to Good Ideas

The overlapping sponsor roles of contracting, legal, procurement, and strategic outsourcing all grew out of the rapid growth of CRO usage starting more than a decade ago. This trend, combined with increased trial complexity, regulatory scrutiny, and budget pressures, led first to a professionalization of the clinical operations function, which took responsibilities away from medical leadership, mostly appropriately. Clinical operations diversified and specialized rapidly, and soon it was felt that study managers had too much to learn, too much to be expert in, and too much to handle—again, probably appropriately. One of the identified specialties was how to contract with the proliferating third parties. However, somewhere along the way, those who were supposed to advise study managers in contracting and advise the enterprise in broad-based improvements, such as standard contract terms and volume discounts, became in charge of these key elements of trial conduct. Indeed, at some companies they have become virtually in charge of the trials themselves, as a side effect of negotiating with those who now actually do the trials (the CROs and sites, instead of in-house staff).

For instance, some companies report the procurement people now have more to say about which sites should be used than the clinical staff. Although I am the first to criticize clinical managers for continuing to use underperforming sites because they want to use key researchers or long-time friends, it is no better to have procurement staff decide which sites to use on the basis of ease of negotiation or compatible liability clauses. One group should be advising the other, while the other actually decides. Now let’s see, should that be the outsourcing department or the medical director? Professional procurement was intended to improve clinical operations management skills, but lawyers don’t manage trials and they don’t improve CRO relationships—although they can make them worse. The inappropriate empowerment of legal-oriented staff has introduced consistent and costly program delays; their issues can be an enormous time sink. Even the value of standardization is diluted by a level of inflexibility that is now getting worse and worse in sponsor contracting. Make no mistake: Poor contracting experiences lead to poor vendor/site performance. This is strongly aggravated by the now common gamesmanship of sponsors delaying payments, and poor payment performance starts site performance on a downward spiral. What was supposed to help is now aggravating and delaying the work; and the work, lest we forget, is drug development, not shiny and polished contracts. It is about good conduct practice, not good contract practice. It is, in fact, about people, not bid grids and invoice templates. There is an assumption that contracting staff are better, indeed necessary, to create good arrangements with CROs. The thinking is that research doctors are not good managers, that individual project managers will serve only their self-interest and not the company’s, and that the art of contracting is so arcane as to be unteachable. I think this assumption is abetted by both parties: Study managers are happy to have one less thing to do, and procurement staff want to perpetuate an aura of indispensability, as anyone does. Regardless of whether these assumptions are accurate, what was their purpose? What was the problem we were trying to solve, and did we solve it, or did we solve something and create another problem? The solution has grown out of proportion to the problem.

Going back to our earlier example, one of the assertions of procurement indispensability is risk aversion—that we have professionalized risk management in these staff, and so that is where protection lies. Generally speaking, risk aversion in this particular area of research is overkill. Site and CRO performance is not dictated by contract terms, but rather only by the skill with which clinical operations managers (people, not paper) learn how to monitor, measure, communicate, and hold accountable their service providers. Contracting per se does not lower risk; people do.

I remember how, long ago, my first corporate lawyer wisely advised me that if I ever actually needed to use one of his contracts to confront a business problem, I had already lost half the battle.

When Good Things Happen to Bad Ideas

Here’s a way to change the suboptimal status quo for the better: We need to vastly reduce the procurement, contracting, and outsourcing infrastructure in clinical research and instead rely on a handful of expert advisors who work with those who should be in charge (i.e., the clinical research managers). In this advisory role, procurement and legal experts can advise on the topics of good contracting, standardized terms, possible volume discounts, metrics, and penalties. This advisory role, instead of a controlling role, is where the legal profession was always meant to be. It will keep us focused on executing clinical trials instead of executing lawyers, as Shakespeare would have it, while improving the relationships among the essential parties in conducting clinical research.

Ronald S. Waife is president of Waife & Associates, Inc. (www.waife.com), and can be reached at ronwaife@waife.com.

Terri P. Kelly, RN, MSQA, CCRA, CQA

Columns

QA Q&A Corner

CRO Conundrums and Access to Electronic Systems Qour regulatory affairs department I have been having a debate with

about what information needs to be submitted to the Food and Drug Administration (FDA) under our Investigational New Drug (IND) application, specifically about contract research organizations (CROs) and vendors. Would you please clarify?

A ducted under an IND applicaFor pharmaceutical studies con-

tion, question 13 on the form FDA 1571 asks, “Is any part of the clinical study to be conducted by a [CRO]?” If the response is YES, the agency wants to know if any sponsor obligations will be transferred to the CRO. If that response is YES, the FDA wants you to attach a statement with the CRO’s name, address, and a list of the responsibilities transferred. The confusion arises with the interpretation of the meaning of a CRO. Most people, when referring to a CRO, think of a company that provides services from monitoring and project management to a multitude of “onestop shopping” activities. However, in the Code of Federal Regulations (CFR), 21 CFR Part 312.3 defines a CRO as “a person that assumes, as an independent contractor with the sponsor, one To submit a question, contact Terri Kelly at [email protected]

or more of the obligations of a sponsor, e.g., design of a protocol, selection or monitoring of investigations, evaluation of reports, and preparation of materials to be submitted to the [FDA].” Thus, any party (person or company) that contracts with a sponsor to provide clinical research services, where these services affect the information submitted to the FDA, should be included in that IND listing. This would include everyone from the company that is contracted to manufacture, package, label, and distribute the study drug to the clinical or bioanalytical laboratory, interactive voice response service, statistical support, central imaging; you get the gist. Many companies are including every vendor to make the process simpler, rather than attempting to determine whether a contracted service actually qualifies as a transferred sponsor obligation. Research is complicated enough; I am a firm believer in making things as simple as possible.

Q: I work at a very small biotech company that outsources almost every service for a trial. If we have contracted with a company for its expertise in providing a service to us, are we, as the sponsor, still responsible, or are they? Why does it seem that the CRO can take advantage of small companies and charge for more services than we thought were in the contract from the beginning?

A: CROs provide valuable service to the research industry, and the following statements do not mean to negatively implicate all CROs. However, you have actually hit on a pet peeve of mine. If a “very small” company contracts with a large global CRO that has promoted itself in the “dog-and-pony-show” bid defense presentation as being an expert in, let’s hypothetically say, pharmacovigilance, and that contract or service order stipulates transfer of responsibility for this service, you would think the CRO would be responsible. Obviously, the sponsor would retain ultimate responsibility for the conduct of the study, and oversight of the CRO by the sponsor would need to be readily apparent and documented (regular meetings, correspondence, etc.). Suppose an auditor discovers that the CRO has been reporting serious unexpected suspected adverse reactions (SUSARs) to the Indian regulatory authority (the Drugs Controller General of India, or DCGI) 15 calendar days following the Manufacturer’s Receipt Date, which in this case is the date that the CRO received the serious adverse event report, to coincide with other countries’ regulatory agency timelines. However, the DCGI requires reporting within 14 calendar days. When the auditor questions the CRO about this issue, it responds that this requirement wasn’t specified in the contract, or that the sponsor did not specifically request this. Keep in mind that the CRO is supposed to know what these requirements are. The CRO further responds that it can make special allowances to do this, but it will need to be added to the budget. Let’s also say that the auditor discovers that none of the SUSARs were submitted to the German regulatory authority, even though Germany was included as one of the countries specified in the contract and the CRO was informed of the date the Clinical Trial Agreement was approved in Germany. It just apparently fell through the cracks. Stuff happens; I understand this. However, when the auditor discusses this with the CRO, it responds that it can certainly submit the bulk package of SUSARs to Germany, but this will be outside of the scope of the contract, since it wasn’t sent at the same time as the original submission of each SUSAR. So, the budget will need to be updated. This is where my pet peeve comes in. If the CRO had done its job correctly in the first place, this would have been included in the budget. However, since it now has to clean up its own mistakes, it charges the sponsor extra. I know we are all in business to make a living, but we also need to remember that we work in an industry based on ethics. Contracts, service orders, and budgets should be thoroughly reviewed to confirm that all applicable services and responsibilities have been documented in detail.
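The dispute above ultimately turns on simple calendar arithmetic. Purely as an illustration (the authority names, the 14- and 15-day windows, and the day-counting convention below are assumptions drawn from the scenario, not a reference to current regulation), a per-country deadline table makes this kind of discrepancy easy to catch before an auditor does:

```python
from datetime import date, timedelta

# Hypothetical reporting windows in calendar days from the Manufacturer's
# Receipt Date; the values mirror the scenario above, not current regulation.
# Day-counting convention assumed here: the receipt day is day zero.
REPORTING_WINDOW_DAYS = {
    "DCGI (India)": 14,  # the stricter window from the example
    "DEFAULT": 15,       # window assumed for the other agencies
}

def susar_deadline(receipt_date: date, authority: str) -> date:
    """Latest allowable submission date for a SUSAR received on receipt_date."""
    days = REPORTING_WINDOW_DAYS.get(authority, REPORTING_WINDOW_DAYS["DEFAULT"])
    return receipt_date + timedelta(days=days)

receipt = date(2012, 3, 1)  # example Manufacturer's Receipt Date
for authority in ("DCGI (India)", "DEFAULT"):
    print(f"{authority}: submit by {susar_deadline(receipt, authority)}")
```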

Q: Our company has all of its documents (Trial Master File, training files, standard operating procedures) stored in electronic format. During an inspection, do we have to provide actual access to the electronic systems, or do we have to print out all the documents?

A: I have always been dismayed by the sheer number of trees killed by the research industry, and I commend any company or individual that attempts to curtail this by converting to electronic systems. Your responsibility during an inspection is to allow the inspector access to any records and reports related to the clinical investigation. The manner in which you provide access is up to you. In my experience, an inspector will not request actual access to the electronic system. Typically, he or she requests the documents, unless there is another reason for inspecting the system itself. Please keep the questions coming!

Terri P. Kelly, RN, MSQA, CCRA, CQA, is president and principal consultant/GCP compliance auditor at Achieve Quality, Inc., a provider of international auditing, gap analyses, regulatory authority inspection facilitation, and training in GCP for the pharmaceutical, medical device, and biotechnology industries. She has more than 25 years of experience in clinical research, with 22 years as a consultant. She has presented internationally for private industry and at global conferences, and she can be reached at [email protected].

ACRP Wants You! Committee Nominations Are Open

Volunteer: Submit an online nomination form today at www.acrpnet.org/committees. Deadline: August 15, 2012.


Brent Ibata, PhD, JD, MPH, RAC, CCRC

Columns

Research Compliance

Risk-Based Integrated Quality Management and ISO 9001

“Risk” is defined as a function of the probability of harm combined with the severity of harm. In August 2011, the U.S. Food and Drug Administration (FDA) released a draft guidance document titled “Oversight of Clinical Investigations—A Risk-Based Approach to Monitoring.” This draft document represents a significant departure from the historic process of 100% source document verification. Without 100% source document verification, sites will need to integrate quality management into processes rather than relying on monitors to identify and query questionable data. Fortunately, for clinical research within the four walls of a hospital, quality assessment and performance improvement programs already exist. For example, hospitals accredited by DNV Healthcare must have programs that meet the International Organization for Standardization’s (ISO’s) ISO 9001 quality compliance standards, which will help these institutions to prepare for a risk-based approach to monitoring (more about this later).

Quality Assessment and Performance Improvement

Clinical research professionals within a hospital are subject to additional regulatory compliance mandates, including the Medicare Conditions of Participation codified in the Code of Federal Regulations (CFR) at 42 CFR Part 482 (Medicare CoPs). One of the Medicare CoPs is the requirement for a quality assessment and performance improvement program. 42 CFR 482.21 provides that “all hospital departments” must participate in an “effective, ongoing, hospital-wide, data-driven quality assessment and performance improvement program.” This includes services furnished under contract or arrangement. There is no regulatory carve-out for clinical research. Therefore, clinical research activities within a hospital that participates in Medicare would fall under the umbrella of services that could be expected to implement a quality assessment and performance improvement program. The International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) and the FDA have published a guidance document on “Quality Risk Management” (ICH Q9),1 which provides a good foundation for a hospital-based clinical research program that intends to institute a clinical research performance improvement program. The two primary principles of quality risk management described in ICH Q9 are:

1. The evaluation of the risk to quality should be based on scientific knowledge and ultimately link to the protection of the patient; and
2. The level of effort, formality, and documentation of the quality risk management process should be commensurate with the level of risk.
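To make the second principle concrete, here is a minimal sketch of scaling documentation effort to a probability-times-severity risk score. The five-point scales, the cut-offs, and the documentation tiers are invented for illustration; ICH Q9 does not prescribe them:

```python
# Illustrative only: ICH Q9 does not prescribe these scales or cut-offs.
def risk_score(probability: int, severity: int) -> int:
    """Combine 1-5 ratings of probability of harm and severity of harm."""
    if not (1 <= probability <= 5 and 1 <= severity <= 5):
        raise ValueError("ratings must be on a 1-5 scale")
    return probability * severity

def documentation_tier(score: int) -> str:
    """Map a risk score to a hypothetical level of quality risk management formality."""
    if score >= 15:
        return "formal risk assessment with documented review"
    if score >= 6:
        return "abbreviated assessment, documented in the study file"
    return "informal review, noted in meeting minutes"

# A problem-prone, high-severity process earns the most formal treatment.
print(documentation_tier(risk_score(probability=4, severity=5)))
```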


The ICH “Good Clinical Practice: Consolidated Guidance” differentiates monitoring (ICH E6 5.18) from auditing (ICH E6 5.19) and defines auditing as “[the] systematic and independent examination of trial-related activities and documents to determine whether the evaluated trial-related activities were conducted, and the data were recorded, analyzed, and accurately reported according to the protocol, sponsor’s standard operating procedures (SOPs), good clinical practice (GCP), and the applicable regulatory requirement(s).”

Hospital Accreditation

Section 1865(a) of the Social Security Act provides that the Centers for Medicare & Medicaid Services (CMS) may recognize national accreditation organizations as accrediting a hospital to have satisfied the Medicare CoPs. Currently, there are three CMS-approved hospital accreditation organizations:

● the American Osteopathic Association/Healthcare Facilities Accreditation Program;
● DNV Healthcare; and
● the Joint Commission.2

Only DNV Healthcare is aligned with ISO 9001, and this alignment “may strengthen the link between regulatory compliance and quality improvement.”3 The first section of the DNV National Integrated Accreditation for Healthcare Organizations (NIAHO®) requirements calls for a quality management system that is compliant with the ISO 9001 standard within three years of initial NIAHO accreditation.4 The NIAHO-compliant quality management system has the following characteristics:

● QM.2 SR.3c—The organization conducts internal reviews of its processes, and resultant corrective or preventive action measures have been implemented and verified to be effective.
● QM.6—In establishing the quality management system, the organization shall be required to have the following as a part of this system:
  – Goal Measurement/Prioritization of Activities (SR.5);
  – Focus on high-risk, problem-prone areas, processes, or functions (SR.5a);
  – Consider the incidence, prevalence, and severity of problems in these areas, processes, or functions (SR.5b);
  – Affect health outcomes, improve patient safety, and quality of care (SR.5c).

The Joint Commission has a similar chapter on Performance Improvement in its “Hospital Accreditation Standards.”5 Additionally, there are Joint Commission standards related to clinical research (RI.01.03.05), investigational medications (MM.06.01.05), and specific elements of performance related to hospital review of research (RI.01.03.05 EP1) and informed consent (RI.01.03.05 EP2-9).

Conclusion

With the trend toward a risk-based approach to monitoring, hospital-based clinical researchers will be motivated to implement risk-based internal reviews of their processes and direct the attention of their institutions’ quality programs to high-risk, problem-prone areas. ICH Q9 provides a good overview of the typical quality risk management process (risk assessment, risk control, and risk review), and ICH Q9 aligns well with ISO 9001. Regardless of which national accreditation organization a hospital chooses to use, each must survey for compliance with Medicare CoPs. One of the Conditions of Participation is the requirement for a “hospital-wide, data-driven quality assessment and performance improvement program.” ISO 9001 is an effective international standard into which a hospital-based research program could scaffold an integrated quality management program.

References

1. U.S. FDA. Guidance for Industry – Q9 Quality Risk Management. Available at www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/ucm073511.pdf. Accessed June 10, 2012.
2. CMS-Approved Accreditation Organization Contact Information. Available at www.cms.gov/Medicare/Provider-Enrollment-and-Certification/SurveyCertificationGenInfo/downloads/AOContactInformation.pdf. Accessed June 10, 2012.
3. Vallejo BC, Flies LA, Fine DJ. 2011. A comparison of hospital accreditation programs. Journal of Clinical Engineering, January/March 2011, p. 32.
4. DNV Healthcare. 2012. National Integrated Accreditation for Healthcare Organizations (NIAHO®) Interpretive Guidelines and Surveyor Guidance. Available at http://dnvaccreditation.com.
5. The Joint Commission. 2011. Hospital Accreditation Standards.

Brent Ibata, PhD, JD, MPH, RAC, CCRC, is the director of operations at the Sentara Cardiovascular Research Institute, and teaches for the online Masters of Clinical Research Administration Program through the University of Liverpool and the Master of Science in Regulatory Affairs program at Northeastern University. Previously, he was the site director at Four Rivers Clinical Research, and an assistant professor in the Division of Neurosurgery and an IRB member at Saint Louis University. He holds a certificate in health law from Saint Louis University’s School of Law. For ACRP, he is a past member of the Global CCRC Exam Committee and a current member of the Association Board of Trustees. He can be reached at [email protected].

Morgean Hirt, ACA

Association News | Certification

2011 Academy Examination Annual Report

Presenting an overview and summary of certification program activities in 2011.

In 2011, a total of 2,072 individuals applied to the Academy of Clinical Research Professionals (the Academy) for certification as a clinical research coordinator (CRC), clinical research associate (CRA), or physician investigator (PI). All told, 1,855 of these applicants tested at a computer-based testing location in one of more than 30 countries, resulting in 1,343 individuals being granted initial certification. This brought the total number of individuals currently certified by the Academy to 13,166. Table 1 statistics provide an overview of activity in each certification program in 2011.

Explanation of Terms

Total Scored Items on Exam: The current examination length is 125 questions; however, 25 of those are pre-test questions being evaluated for possible “official” inclusion in an exam at a later date. Information on how candidates perform on those questions is being collected without the questions affecting a candidate’s score.

Total Scaled Score Possible: This is the highest score possible and represents a perfect score. The Academy uses a scaled score; for 2011, the scale ran from 1 to 99. Scaled scoring is a common practice among certification programs with multiple forms of their exams.

Scaled Passing Point: This is the score a candidate must receive in order to pass the examination. The passing score is a scaled score of 70, which correlates to the number of questions a candidate must answer correctly, based on which form of the exam the candidate is taking. That number can differ from form to form. The number of questions that must be answered correctly is determined by evaluating the overall difficulty level of the exam. The more difficult the exam, the fewer questions a candidate would have to answer correctly in order to be assessed as having mastered the content. The Academy uses the Modified Angoff method to establish the passing point. The exam is not scored on a “curve,” and there is no predetermined number or percentage of candidates that will pass. The passing score is set for each form of the exam.

Scaled Mean Score: This is the average score of all candidates who took a specific form of the exam during 2011. A passing score is 70. The Academy uses criterion-referenced scoring, meaning that the passing point is set for the first exam and does not change between exam administrations. The exam is not scored on a curve, so it does not matter how many candidates test or what the make-up of the testing group is. The Academy does not predetermine the number of people it will certify in a given year; performance, and therefore passing the exam, is up to the candidate.
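For readers curious about the arithmetic behind an Angoff-style passing point, the sketch below sums the judges’ average item ratings to produce a raw cut score before any scaling. The panel size and ratings are fabricated for illustration, and the Academy’s actual procedure is more involved than this:

```python
# Each row holds one judge's estimates of the proportion of minimally
# competent candidates expected to answer each item correctly.
# Three judges and four items keep the example small; real panels rate
# every scored item on a form.
judge_ratings = [
    [0.80, 0.55, 0.70, 0.60],
    [0.75, 0.60, 0.65, 0.70],
    [0.85, 0.50, 0.75, 0.65],
]

n_judges = len(judge_ratings)
n_items = len(judge_ratings[0])

# Average the judges' estimates item by item, then sum across items to get
# the expected raw score of a borderline (minimally competent) candidate.
item_means = [sum(judge[i] for judge in judge_ratings) / n_judges
              for i in range(n_items)]
raw_cut_score = sum(item_means)
print(f"raw cut score: {raw_cut_score:.2f} out of {n_items}")  # 2.70 out of 4
```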

Table 1  Statistical Overview of All Three Certification Areas in 2011
(Where two values appear in a cell, they correspond to the two exam forms administered for that designation.)

Certification Test Data                      CRCs                      CRAs                      PIs
Total number of applicants                   1,250                     723                       90
Total number of eligible candidates          1,206                     656                       77
Total number of test takers                  1,150                     646                       59
Form number                                  8730030401 / 8730030501   8730060401 / 8730060501   8730050401
Total number of candidates tested on form    541 / 609                 354 / 292                 59
% of candidates passing                      76.0% / 73.3%             72.6% / 67.3%             81.9%
Total scored items on exam                   100                       100                       100
Total scaled score possible                  99                        99                        99
Scaled passing point                         70                        70                        70
Scaled mean score of candidates              75.76 / 76.5              79.83 / 79.69             76.51
Range of scaled scores                       36–95 / 38–96             17–98 / 36–96             44–96
KR-20 reliability coefficient                0.90 / 0.90               0.91 / 0.90               0.88
Standard error of measurement                3.87 / 3.88               3.69 / 3.74               4.12

Range of Scores: This shows the range of candidate scores between the lowest scoring candidate and the highest. The Academy’s exams demonstrate a fairly typical range for a professional certification program. No one has scored a 25, which would indicate the candidate guessed at all the questions; nor has anyone achieved a perfect score. This indicates that the difficulty level of the exam is generally appropriate for the candidate population at large.

KR-20 Reliability Coefficient: This is one of the most important program statistics. It indicates the reliability and consistency of the exam, or how well the exam distinguishes between candidates who have mastered the content covered and those who have not yet mastered it. Professional certification programs strive for KR-20 results of 0.80 or higher. The statistics for the Academy’s exams indicate a high level of reliability, which means that the exam is quite dependable for assessing a candidate’s mastery of the information tested.
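For readers who like to see the arithmetic, the sketch below computes KR-20 and, from it, the standard error of measurement discussed next. The tiny response matrix is fabricated for illustration; the real computation runs over a full form’s worth of candidates and 100 scored items:

```python
# Rows are candidates, columns are scored items; 1 = correct, 0 = incorrect.
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 0],
]

n = len(responses)                        # number of candidates
k = len(responses[0])                     # number of scored items
totals = [sum(row) for row in responses]  # each candidate's raw score
mean = sum(totals) / n
variance = sum((t - mean) ** 2 for t in totals) / n  # population variance

# KR-20 = k/(k-1) * (1 - sum(p_i * q_i) / variance), where p_i is the
# proportion of candidates answering item i correctly and q_i = 1 - p_i.
pq_sum = 0.0
for i in range(k):
    p = sum(row[i] for row in responses) / n
    pq_sum += p * (1 - p)
kr20 = (k / (k - 1)) * (1 - pq_sum / variance)

# Standard error of measurement: score SD, scaled down as reliability rises.
sem = variance ** 0.5 * (1 - kr20) ** 0.5
print(f"KR-20 = {kr20:.2f}, SEM = {sem:.2f}")
```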

Standard Error of Measurement: The standard error of measurement yields an estimate of the average amount of error associated with a test score received on an Academy exam. A large standard error of measurement indicates that a significant amount of error can exist in the score received. The small standard error for the Academy exams suggests that the Academy exams are a precise measure of the knowledge and skill required to carry out the role-specific duties of a CRC, CRA, or PI.

Each designation’s exam is overseen by a Global Exam Committee made up of currently certified clinical research professionals who work with a professional testing agency to develop the exam. The exams for each program (CCRC®, CCRA®, CPI®) go through several review steps by each Global Exam Committee before being released for administration. In addition, the passing standard is set by a diverse panel of certified clinical research professionals.

Currently, there are 56 active subject matter experts who are Academy certificants participating in special training on how to write multiple-choice test questions. Training is offered each year at ACRP’s Global Conference. Every draft test question must be linked to a topic area found on the Detailed Content Outline for a specific designation, as the outlines are specific to each job role. Each question must also be referenced directly to a specific section of the International Conference on Harmonization Guidelines E2A, E6, E8, or E9. On average, 75% of draft items are approved for addition to the item pool as pre-test items. In 2011, 119 certified clinical research professionals directly contributed to the development of the Academy’s certification exams.

Morgean Hirt, ACA, is the director of certification for ACRP. She can be reached at [email protected].

March 2012 Exams

Association News | Certification

ACRP Certifies 599 Clinical Research Professionals

Congratulations to all the exam candidates who passed the March 2012 Certification Examinations. By attaining the designations CPI® (23), CCRA® (196), and CCRC® (380), these clinical research professionals have demonstrated their commitment to competence, the advancement of knowledge, and the highest scientific standards in their efforts to improve the quality of life through clinical trials.

CPI®

Raymond Louis Benza, MD, CPI, Allegheny General Hospital Tshekedi Dennis, MD, CPI, CNS Network, Inc Mohamed El-Shahawy, MD, MPH, CPI, Academic Medical Research Institute Ira Friedlander, MD, CPI, Cardiovascular Consultants, Inc Wael A Harb, MD, CPI, Horizon Research Mark D Heiman, MD, CPI, Cardiology Associates of Fairfield County David Andrew Hinchman, MD, CPI, St Luke’s Regional Medical Center Michael Tzuoh-Liang Hong, MD, CPI, Nea Baptist Clinic Amy Hui-Chen Kao, MD, MPH, CPI David U Lipsitz, MD, FACS, CPI, NorthEast Urology Research Richard R Lotenfoe, MD, CPI, Discovery Clinical Trials Brian MacGillivray, MD, CPI HaeJung Huh Marr, MD, CPI, Covance Clinical Research Michael Joseph McCartney, MD, CPI, ActivMed Practices and Research Thomas Edward Murtaugh, MD, CPI, Covance Petros Nikolinakos, MD, CPI, Northeast Georgia Cancer Care Tamara Janel Nix, MD, CPI, Pediatrics and Adolescent Medicine David Packham, MD, CPI, Melbourne Renal Research Group Howard John Quint, MD, CPI, Comprehensive Clinical Development Marilyn A Roderick, MD, CPI, High Point Clinical Trials Center Muhammad Shakeel, MD, CPI, Nephrology and Internal Medicine of Anderson Sanjiv Sharma, MD, CPI, Memory Enhancement Center of New Jersey Nitun Verma, MD, CPI, Washington Township Center for Sleep Disorders

CCRA®

Bonnie C Abbruzzese, MS, RD, CCRA, Neoprobe Corporation

Lijjo G Abraham, CCRA Bernadette Ahern, CCRA, Duke Clinical Research Institute Kishor Ananda Ahire, CCRA, PPD Jalila Abdel-Jalil Amr, PharmD, CCRA, Clinserv International Teresa Atwood, CCRA, Duke Clinical Research Institute Mayssa Badour, CCRA, ClinServ International Natalie Ballard, CCRA, Novartis Pharmaceuticals Tina Searcy Barrett, CCRA, Reata Pharmaceuticals Natalie A Bascom, CCRA, Becton Dickinson Cindy Becerril, BD, CCRA, CECYPE Donald Mukete Betah, CCRA, Quintiles Kurt G Bischoff, MS, CCRA, Quintiles Joannie Blanchette, MSc, CCRA, CHU Sainte-Justine Research Centre Jodi S Boarman, RN, CCRA, J Boarman Consulting Nick A Boisen, CCRA Megan Bower, CCRA, INC Research Robin L Bravo, CCRP, CCRA, CCS Associates Robert Broadus Broadwater, CCRA, INC Research Slawomir Brudniak, CCRA, Covance Jane Paula Brunette, CCRA, GlaxoSmithKline Sally Buelta, CCRA Ray F Burich, CCRA Erminia Buscaino, MS, CCRA, RPS Sharon E Califf, CCRA, Duke Clinical Research Institute Megan Cease, CCRA Viviana Cecinato, CCRP, CCRA, PPD Ruttiya Charoenchokpanit, CCRA, fhi360 Emily Chu, RN, CCRA, AstraZeneca Megan Rose Combs, BS, CCRA, BIOTRONIK Rykae Cooper, CCRA, Duke Clinical Research Christine Cornwell, CCRA, Genentech Kimberly A Cotten-West, CCRA, Cotten Enterprises Amanda Marie Dahl, CCRA, L’Oreal USA Betty J Dean, MBA, MLT, CCRA, WL Gore & Associates Sabine Demeester, MSc, CCRA, genae associates nv Dominique Denter-Erlenbach, CCRA, SocraTec R&D

Nele Dervaux, MSc, CCRA, genae associates nv Crystal Donnelly, RN, MSN, CCRA, Pfizer Mary Krusi Dyke D’Rozario, CCRP, CCRA, Quintiles Dale Dubach, CCRA Amy Lynn Dwyer, CCRA, Duke Clinical Research Institute Dale Mary Eadie, RN, CCRA, Fresenius Medical Services Angie Edwards, CCRA, Beltas Laura Ely, CCRA, Dexcom Margaret Rosemary Evans, CCRA, Evans Clinical Consulting Maire Fenton, CCRA, Palm Beach CRO Kathryn Ann France, BA, RN, PHN, CCRC, CCRA, University of Minnesota Medical School Christopher Fromm, CCRA, Quintiles Lyndsey Garritty, BA, CCRA, Canadian VIGOUR Centre Amanda Peterson Gervais, BA, CCRA Kristina Gibbens, CCRA, Medpace Medical Device Ryan Daniel Gladney, CCRA, HLT Lauren Gliko, CCRA, Quintiles Thalia Gooden, CCRA Shuji Goto, CCRA Wendy Michelle Graf, BSN, CCRA, Graf Monitoring Angelica Graves, BS, CCRA, Neutrogena Jessica Lauren Greene, CCRA, PFM Medical Cynthia M Gross, RN, CCRA, Warner Chilcott Karen Gurevich, MHA, CCRA Colleen Haberman, CCRA Nadya Anne Haljkevic, CCRA, INC Research

Sara Jean Hallowell, MS, CCRA, Alere Janet Hargett, MS, CCRA Whitney Hart, CCRA Kristine E Hartman, CCRA, Johnson & Johnson Allison G Harty, CCRA, Merck & Co Jennifer Hasenei, CCRA, DSP Clinical Research Hisham Hassan, MD, CCRA, PPD Christina Hawley, CCRA, Stryker Orthopaedics Shelley Ann Haybeck, CCRA, Covidien Helen E Hayes, CCRA, Alere Marilyn Hendrickx, CCRA, genae associates nv LiJen Heng, CCRA, PAREXEL Brandi Henley, CCRA, INC Research Dahlia Henry, CCRA, Forest Laboratories Anita Louise Hepditch, RN, CCRA, Duke Clinical Research Institute Mouhamad Ali Hijazi, CCRA, ClinServ International Elena Hoskin, MPH, CCRA, Population Council Showkat Hossain, PhD, CCRC, CCRA, Advent Clinical Research Centers Dionne Howe, CCRA, Fred Hutchinson Cancer Research Center Kate Huffman, CCRA, University of Michigan Nolinne Keo Humphreys, LPN, CCRA, Galderma Research and Development Shannon Ivey-Morgan, CCRA Lindsey Jacobson, MS, CCRA, AVEO Pharmaceuticals Darcy Lynn Johnson, CCRA, INC Research Andrea Johnston, CCRA Natasha Jonassaint, CCRA, Olympus Andrea M Jones, CCRA


Certification Registry

www.avectraacrp.com/certlist

ACRP and APCR members and the public can verify the certification status of Academy-certified clinical research professionals through the ACRP online Certification Registry. To search the Registry database, simply enter the first and last name. Additional fields—middle name or initial, suffix, city, state, province, or country—can be entered to narrow the search. The Certification Registry database will then provide the individual’s:

● certification type(s): CCRA®, CCRC®, CCTI®, and/or CPI®
● initial certification date(s)
● current certification expiration date(s)

Note that some Academy-certified certificants opt out of inclusion on the Registry list. For additional information on certification, go to www.acrpnet.org or www.apcrnet.org.

Jenine Jones, RN, BSN, MHA, CCRA, IMARC Research Janhvi Bhadresh Joshi, CCRA, Quintiles Jan M Keen, RN, BSN, CCRA Siba Kikanovic, CCRA, Allergan Nicole K Kilburn, BS, CCRA, Boston Scientific Susan K Kimmel, BSN, MSN, CCRA Tracey Kisly, CCRA, Takeda Pharmaceuticals Cathy Vastine Knisley, MD, MS, CCRA, WL Gore & Associates Krisilda Lawrence Krishans, Sr, CCRA, George Institute for Global Health Jiao Kuang, CCRA, Gamma-Dynacare Medical Laboratories Darren J LaCour, Jr, MS, CCRA, GluMetrics Scarlet LaFever, CCRA, Spherix Marvin Lau, CCRA, Abbott Medical Optics Yun Jie Lau, CCRA, PAREXEL Dee Lee, CCRA Vaudreca Elaine Lee, MS, CCRA, Quintiles Nicole Hall Leedom, CCRA, United Therapeutics Dianne Hall Leloudis, RN, MSN, CCRA, Duke Clinical Research Institute Sara Leone, CCRA, ONO Pharma USA Donglan Li, CCRA, INC Research Lori B Lickstein, CCRA Jonathan Thomas Likowski, BS, CCRA, Atlantic Research Group Sarah Shan-Erh Lin, CCRA, Astellas Pharma Taiwan Lucy Lu, CCRA Megan Magerkurth, CCRA, ICON Clinical Research Ann M Marseilles, CCRA Tracy Matarazzo, CCRA, Quintiles Rebecca Maxhimer, BA, CCRA Melissa A McClafferty, RN, BSN, CCRC, CCRA, Kforce Clinical Research Christina McGee, MS, CCRC, CCRA Alissa McGuire, CCRA, INC Research Tal Meir, CCRA Tejal Dattatraya Mejari, CCRA, PPD Carol Marie Miller, CCRA Victoria Millward, CCRA Isabela Morales-Sierra, MD, CCRP, CCRA, University of Miami Nadeem Mukhtar, CCRC, CCRA, Lucile Packard Children’s Hospital Halina Nawrocki, CCRA, Canadian VIGOUR Centre David Nichols, CCRA, Novartis


Keithryn Nicolas, CCRA, Actelion Pharmaceuticals US Laura E Olson, CCRA Deanna Orsi, MSc, CCRA Darlene Pagano, CCRA Anne Parker, RN, MBA, CCRA, Kforce Clinical Research Kushan Dhirajlal Patel, CCRA, Synchron Research Services Ajit Patil, CCRA, Piramal Life Sciences Russell Blain Pawlowski, CCRA, INC Research Christie Pennington, CCRA, Ventana Medical Systems Kathryn S Petti, RN, CCRA, KS Petti Consulting Denise Plouffe, CCRA, PPD Miguel Posada, MS, CCRA, Juno Research Pamela Pounds, CCRA, PharmaNet/i3 Amol Arvind Prabhudesai, Sr, CCRA, IndiPharm Sarah Prodan, MS, RAC, CCRA, Synteract Jane Ramerth, RN, BSN, CCRA, NATO Maintenance and Supply Agency Sarah Ramey, CCRA, Duke Clinical Research Institute Alejandra Ramirez Santiago, MD, CCRA, Palyon Medical Corporation Franklin Joseph Raspa, II, RN, CCRA, Mylan Pharmaceuticals Sejal Raval, CCRA, Medpace Medical Device Jennifer Collins Reinhard, CCRA, Duke Clinical Research Institute Juan Reyes, DVM, MS, CCRA Dustin B Ritter, CCRA Wanda P Robinson, BSN, PHN, CCRA, VertiFlex Nidia Y Rosado, CCRA, Duke Clinical Research Institute Daisuke Saeki, CCRA, Abbott Japan Steven Kenji Sakata, CCRA Koichi Sato, CCRA, Mundipharma KK Kim Seon Hee, RN, CCRA, Samsung Medical Center Anjali K Simh, CCRA Ashley D Smith, MSHS, CCRA Nadine Lee Smith, RN, CCRA, Flinders Clinical Research Mary Patricia Sprague, RN, CCRA, Oracle Clinical Research Ryan Stults, CCRA, Duke Clinical Research Institute

Nancy Lynn Sullivan, RN, CCRA Carla Terrill, RN, CCRA, Medtronic Manuel Arun Thangaraj Rathinasami, CCRA Tracey Thomas, CCRA, Yale University Miriam Thune, BS, MS, CCRA Lesley Ann Tizzard, RNBN, CCRA, Oncolytics Biotech Rebecca Susan Tjin, CCRA, Sticares InterACT Kristen Tobola, CCRA Celestina Sally Touchet, BS, BA, MBA, CCRA, INC Research Purav Trivedi, CCRA, Cliantha Research Stephanie Vallee, CCRA Nicolaas Petrus Lambertus van Laak, MSc, CCRA, Janssen-Cilag Magalie Van Tongel, CCRA Betsy S Varghese, MA, CCRA, SUNY Upstate Medical University Berta Villegas, CCRA Suzanne Marie Vogel, MPH, CCRA, Alere Dawn M Wagner, CCRA Amber Wall, CCRA Lisa D Wallerstein, CCRC, CCRA, International Genomics Consortium Zhaochuan Wang, CCRA Prieya Wason, CCRA, Actelion Pharmaceuticals Tammy J Watts, RN, MSH, CCRA Ashley Wegel, MEd, CCRA, The EMMES Corporation Kathleen Weyns, MSc, CCRA, genae associates nv Sarah R Wibben, CCRA, Medtronic, Inc Vanessa N Windham, CCRA, Windham Global Research Consultants LLC Jacqueline Woodruff, CCRA, Johnson & Johnson Wan-Jung Wu, CCRA, Formosa Biomedical Technology Corp Lan Yang, CCRA, Nordic Bioscience (Beijing) So Jeong You, RN, CCRA Nathifa Young, MT(ASCP), MBA, CCRA, Abbott Laboratories Karina Zarins, CCRA Caitlin Zellner, CCRA, Duke Clinical Research Institute

CCRC®

Beatriz Acevedo, RA, CCRC, Mount Sinai Medical Center

Susan Adu-Amankwah, CCRC, Noguchi Memorial Institute For Medical Research Shannon Albertson, CMA, CCRC, Rocky Mountain Diabetes and Osteoporosis Center Joy S Alex, CCRC, University of Rochester Melissa Jo Alexander, RN, CCRC, Florida Hospital Rana Fathi Al-Jaouni, CCRA, CCRC Tori Ann Allen, BS, CCRC, Community Clinical Research Center Subhashini Allu, CCRC, Northwestern University Debbie Amendolare, LPN, CCRC Denise Carol Ammons, CCRC, Western Carolina Retinal Associates Olga I Ananina, CCRC, Dent Neurologic Institute Victoria Anderson, MPH, CCRC, Denver Health Rocky Mountain Poison & Drug Center Jennifer Andoh, BA, CCRC, Southern Illinois University School of Medicine Kristine Arges, RN, CCRC, Duke University Hospital Valeria Viviana Arnaudo, BA, CCRC, TKL Research Timothy Joseph Babin, CCRC, Wheaton Franciscan HealthCare Mary Elizabeth Baker, CCRC Haley Bakies, BA, CCRC, RemingtonDavis Laura Kathleen Bales, RN, OCN, CCRC Katrina Bandong, CCRC, Weill Cornell Medical College Julie Anne Barenholtz, MSW, CCRC, New England Research Institutes Joyce B Barmen, CCRC, Rapid Medical Research Kimberly Stapleton Barnette, CCRC, University of Florida Jacksonville Healthcare Tara Barrineau, CCRC, Levine Cancer Institute Thea Barsalou, RN, CCRC Jessica Bartlett, CCRC, Clinical Research Advantage Claudia Bartos, RN, CCRC, University of Texas MD Anderson Cancer Center Jackie L Basham, CCRC, Columbus Center for Women’s Health Research Heather Leigh Baumhauer, CCRC, CompleWare Corporation Anna Renee Bays, LPN, CCRC, Holston Medical Group Katherine Ann Beattie, RN, CCRC, Minneapolis Heart Institution Foundation Jeremy Beatty, CCRC, Center for Allergy, Asthma & Immunology Karli Beaver, BS, CCRC, Remington-Davis Lea H Becker, MT(ASCP), CCRC, University of Virginia Health System Ellen Bedenko, CCRC Sandra Befera, RN, CCRC, Southcoast Hospitals Group Rachel Bennett, ARNP-C, CCRC, Clinical Trials & Research Center of Florida Julie Bishop, RN, BSN, CCRC, Spectrum Health Hospital Pamela Blackburn, CCRC, Cornea Consultants of Arizona Cathie Bloem, MPH, RN, CCRC Dudley Allen Boone, RPSGT, CCRC, SleepMed Terry Boord, PhD, CCRC, Alamo Medical Research Traci Borns, CCRC Kira Botkin, CCRC, Grant Medical Center Katie Bowen, LPN, CCRC

Mary Gunn Boyle, RN, MSN, CCRC Diane Lynne Branham, RN, CCRC, Colorado Clinical Translational Research Center Erin Brennan, CCRA, CCRC, Lenox Hill Hospital Ann Brenner, CCRC, Via Christi Research Susan Stephanie Brietigam, BA, CCRC, Northwestern University Jae L Brimhall, RN, CCRC, Avail Clinical Research Gabrialle Browning, LPN, CCRC, Rocky Mountain Diabetes Shelly Brunk, RN, CCRC, University of Virginia Beverly Bryan, BA, CCRC, Emory University Dawn D Bryant, LVN, CCRC, Discovery Clinical Trials Patricia A Burks, RN, MA, CCRC, Washington University School of Medicine Silas Bussmann, CCRC Melissa E Cagle, CCRC, Sun Research Institute Renay D Caldwell, CCRC, Emory University Christine Marie Callahan, RN, CCRC Meredith Capasse, CCRC Sarah Robyn Carbone, CCRC, Linear Clinical Research Marisol Castillo, SC, CCRC, Center For Clinical Studies Anna Liza Castro-Malek, BA, CCRC, University of Illinois at Chicago Fatiha Chabouni, CCRC, Weill Cornell Medical College Jill Maurine Chernin, RN, CCRC, DM Clinical Research Gina Christiansen, BS, CCRP, MHS, CCRC William Chrvala, CCRC, Mid Hudson Medical Research Gina Ciavarella, CCRC Jennifer Anne Cihigoyenetche, BS, CCRC, St Luke’s Intermountain Research Center Sarah R Clark, RN BSN, CCRC, OSF Saint Francis Medical Center Ashley Clayton, CCRC, Memory Enhancement Center of New Jersey Eric Clayton, MS, CCRC, Southeast Regional Research Group Carla Ann Cockerline, MSc, CCRC, Nutrasource Diagnostics Anitra Fawn Coicou, CCRC Wendyann Collins, MA, CCRC, Hamzavi Dermatology Carol H Connell, CCRC, TKL Research Nichole Cope, CCRC, Clinical Research Institute Melisa Celaya Cortes, MA, CCRC, Scottsdale Healthcare Pamelo E Costales, Jr, CCRC, PAREXEL Cheryl Frances Crabtree, RN, CCRC, Ohio Health Research Institute Jennifer Marie Creasor, RN, CCRC, Hamzavi Dermatology James A Crump, III, LVN, AS, CCRC, Cardiovascular Associates of East Texas Marina Cruz, CCRC Makeda Culley, CCRC, North Shore Long Island Jewish Health System Rebecca Lynn Dahme, RN, CCRC, Aurora Health Care Ian Dalangin, CCRC, Lenox Hill Hospital Kavitha Damal, PhD, CCRC, University of Utah Kaitlyn S Daniels, BSN, RN, CCRC, The Children’s Hospital of Philadelphia Amy Gayle Davidson, RN, CCRC, Heart Center Research

Amy Leigh Davis, DBA, MBA, CCRC Betty deBettencourt, CCRC, Fogarty Institute Steven Patrick DeMartino, CRTT, RPFT, AEC, CCRC, Anne Arundel Health System Research Institute Amie Demming, CCRC, HCCA Clinical Research Solutions Amanda G Donoho, LPN, CCRC, Holston Medical Group Clinical Research Karen Dorman, RN, MS, CCRC Angelique Dozier, CCRC, Sneeze, Wheeze & Itch Associates Wendy Drewes, BSN, RN, CCRC, Winthrop University Hospital Liesel O Dudek, RN, OCN, CCRC, The Cancer Institute of New Jersey Rita Kimberly Duke, CCRC, University of Florida Cancer Center Dixie Durham, MHS, RRT-NPS, CCRC, St Luke’s Cystic Fibrosis Center of Idaho Jennifer Anderson Dvorak, CCRC, Southwestern Regional Medical Center Sarah Dzigiel, CCRC Sara Edwards, MS, CCRC Laurie Ann Emmert, LPN, CCRC, Bradenton Research Center Susan Marie Engerman, BSN, RN, CCRC Barbara Enright, APN, CCRC, Children’s Specialized Hospital Sandra R Epps, CCRC, Innova Clinical Trials Jennifer Esaki, CCRC, Westlake Medical Research Tammy Facciolo, CCRC, Michael A Werner, MD Laura Falcone, APRN, CCRC, Meridian Clinical Research Anne Marie Fann, RN, CCRC, Saint Louis University Julia Marie Farquharson, BSc, CCRC, Mount Sinai Hospital Elongia Farrell, CCRC, Roswell Park Cancer Institute Cecilia G Felizarta, RN, CCRC, Franco Felizarta, MD Yenny Mae Fernando, CCRC, Alamo Medical Research JoAnn Filla-Taylor, BSN, RN, CCRC Gabriella Fini, CCRC Jane M Fischer, CCRC, The Cancer Institute of New Jersey Susan Ann Fore-Kosterski, RN, CCRC, Northwestern University Zohreh Forghani, MD, CCRC Claudia Fortiche, CCRC Lee Catherine Frizzle, RN, CCRC, Hamzavi Dermatology Rebecca J Frye, RN, CCRC, East Tennessee Center for Clinical Research Carla G Fuentes, CCRC, Aurora Health Care LeiAn Gainey, CCRC, Pediatric Neurology LuAnna Mae Garcia, CCRC, New Mexico Heart Institute Traci Gardina-Swim, CCRC, San Diego Clinical Trials

Stacey P Gates, RN, BSN, OCN, CCRC, National Cancer Institute Carol Ann Gelderman, RN, BSN, MS, CCRC, Northeast Georgia Medical Center Susan A Genell, CCRC, Northwestern University, Feinberg School of Medicine Nicole Genova, CCRC, Northeast Clinical Trials Group Bethany Giachetti, CCRC, Cancer Care Northwest Darlene P Gibson, RN, CCRC, Georgia Health Sciences University Marianne Reid Gildea, RN, CCRC, University of Utah Melissa S Ginn, CCRC, The Research Institute at Nationwide Children’s Hospital Erica Lynn Glaser, CCRC Christine Marie Goetz, CCRC, MultiCare Health System Jennifer Goodrich, RN, CCRC, Skin Search of Rochester Colin M Gorman, CCRC Marsha Gossett, CCRC, CU Pharmaceutical Research Buffi Green, CCRC, Black Hills Regional Eye Institute Sonya Greenwood, RN, BSN, OCN, CCRC William Keith Gryder, CCRC Dena Gustafson, BSN, CCRC, Aurora Cardiovascular Associates Oscar Guzman, CCRC Charlotte Hall, RN, CCRC, Linear Clinical Research Ashleigh Hannah, CCRC, University of Mississippi Medical Center Amy Hanson, CCRC, Twin Cities Spine Center Karen L Harmon, RN, CCRC, Essentia Institute of Rural Health Nicholas Lawrence Harris, CCRC, Indiana University School of Medicine Eugenia Louise Hatfield, LPN, CCRC, Center For Cardiovascular Research Erin E Hawks, CCRC Leona Mary Heffelfinger, CCRC Jennifer Ruth Hege, RN, CCRC, Children’s Hospital Colorado Miranda Heisterkamp, CCRC, ICON Development Solutions Andrew Barry Herndon, CCRC, Virginia Research Center Bridget Ho, BA, CCRC, United Hospital Allina Hospitals and Clinics Marcia Holladay, CCRC, Sentara Cardiac Research Institute Sarah Howard, RN, CCRC, Community Hospital Anderson David Corbin Hsi, BS, CCRC, Albuquerque Clinical Trials Bonnie L Hughes, RN, BSN, CCRC, Advocate Hope Children’s Hospital Mary Kassandra Hulsey, CCRC, Center for Advanced Research & Education Leslie Hutchinson, CCRC, Southeast Regional Research Group Linda Faye Imel, CCRC, Community Clinical Research Center Rajneesh Jayaram, CCRC, San Antonio Kidney Disease Center Physicians Group

Brian Jennings, CNMT, CCRC Wendy Ilene Jenvey, RN, BSN, CCRC, Pulmonary Associates of Richmond Sara Temiyasathit Jones, PhD, CCRC, VA Boston Healthcare System Toshiko Kammera, RN, CCRC Margaret H Karpink, RN, CCRC, AI duPont Hospital for Children Jessica Anne Kearney-Bryan, RN, BSN, CCRC, Carolinas Medical Center Jessica Keller, CCRC, UC Health Otolaryngology-Head and Neck Surgery Kimberly Joy Kelso, RN, CCRC, Alegent Health Kara Kenney, CCRC Karen Lynn Kernen, RN, BSN, CCRC, Kosair Charities Pediatric Clinical Research Unit Fatima Dilnaaz Khan, RN, CCRC, DuPage Medical Group Iram Khan, CCRC Amanda Kincaid, RN, BSN, CCRC, PriMed Clinical Research Megan L Kingdon, CCRC, Cincinnati Eye Institute Ingrid Angela Kissel, RN, CCRC, Christiana Care Health System Barbara Kleiber, CCRC, The Ohio State University Janet Lee Knilans, CCRC Anne Kopko, CCRC, The Ohio State University Medical Center Melissa Koschnitzke, MA, CCRC, Medical College of Wisconsin Alexandra Kowalski, BA, CCRP, CCRC, University of Louisville Catherine Kraft, CCRC Amy C Kreisler, CCRC, Quintiles Judy Jean Kroulek, RN, CCRC, Chattanooga Research & Medicine Linda G Kruleski, CCRA, CCRC Debra Kruser, RN, BSN, CCRC, University of Wisconsin-Madison Anna Kukulka, CCRC, University of Florida Yi Ting Kuo, RN, CCRC, National ChengKunh University Hospital Yasuko Kurata, RPh, CCRC Julie Lafave, CCRC Taryn Lynn Lapponese, BS, CCRC Vanessa Laroche, CCRC, Columbia University Mary Cowles Laszlo, CCRC, Emory University Marie Lawrence, CCRC, Allergy, Asthma & Sinus Center Julie Lawrence, CCRC, Desert Hematology Oncology Adelaida S Leal, CCRC, New Mexico Clinical Research & Osteoporosis Center Lisa LeTouzel, CCRC Yangmei Li, PhD, CCRC, St Michael’s Hospital Heather Littell, CCRC, Advanced Clinical Research Shan Liu, RN, CCRC, Piedmont Heart Institute Kristi Livermont, CCRC, Black Hills Regional Eye Institute Amy Lobner, MPH, CCRC

The CCRA® and CCRC® programs are accredited by the National Commission for Certifying Agencies (NCCA). Achieving NCCA accreditation is a significant milestone for ACRP and the clinical research profession. NCCA has certified hundreds of well-known and highly respected certification programs and uses the highest standards of analysis and review to evaluate programs for accreditation.


Madonna Luna-Bautista, RN, CCRC Janet L Lung, RN, CCRC, Sterling Research Group John Michael Lybarger, BA, CCRC, The University of Florida Kristy Macci, RN, MSN, CCRC, Pfizer Darshan Mahida, MD, CCRC, DM Clinical Research Antoinette Mancini, CCRC, CHOP Newborn Care at Pennsylvania Hospital Lindsey Mann, BA, BS, CCRC, LSU Health Shreveport Psychiatry Michele Manrique, CCRC, University of Miami Kathleen Ann Mansell, RN, MSN, CCRC Jaime Marino, CCRC, APEX Medical Research Christine Marie Matelan, CCRC Nancy Stack Mather, ARNP-C, CCRC, Daytona Heart Group Yukiko Matsushima, CCRC, Keio University Faculty of Pharmacy Lindsay Mattino, RN, BA, OCN, CCRC, Martin Memorial Health Systems Daniel Peter Matulich, CCRC, Columbia University Ruby Maynes, CCRC, Western Sky Medical Research Heidi McAllister, CCRC, Dartmouth College Patricia L McCollum, CCRC, Legacy Clinical Research Kellene P McDermott, BS, CCRC, St Luke’s Intermountain Research Center Margaret McDonald, CCRC, SleepMed Andrea G McDougal, CCRC, Emory University Jennifer Louise McGrath, CCRC, Royal Adelaide Hospital Melissa M McGraw, RN MSN, CCRC, Carolinas Medical Center Sheri McIlvain, MA, RNC, CCRC, Kootenai Medical Center Nancy Ann McKay, BSN, RN, CCRC, Children’s Hospital Colorado Carrie A McKenzie, RRT, CCRC, Cities Research Center Anna Mehlhoff, CCRC Ann Marie Mehringer, MS, CCRC, University of Michigan Melissa Gene Melton, CCRC, Alamo Medical Research Tracy L Mente, BSN, RNC-OB, CCRC, Wheaton Franciscan Healthcare Kian Merchant-Borna, MPH, CCRC, University of Rochester Medical Center Charlene Metz, CCRC Margaret Louise Milazzo, CCRC, Respiratory Clinical Trials Unit Lindsey Miller, CCRC, Allergy and Asthma Research Center Nirvi Mistry, CCRC, Stanford University Cheryl Mize, RN, CCRC Brent Moen, CCRC Ana L Moreno, BA, CCRC, University of California, San Francisco Rhonda A Morin, RN, CCRC, Quest Research Institute Catherine Lee Morningstar, BS, CCRC, Cancer Care Associates of Fresno Medical Group Janice P Morris, CCRC, Gaffney Pharmaceutical Research Suzanne Christine Morton-Kuker, MA, CCRC, Medical University of South Carolina Katherine Mungari, CCRC Sally A Murray, CCRC, The Oregon Clinic, West Hills Gastroenterology


LaDonna Muscatell, RN, CCRC, Wenatchee Valley Medical Center Cynthia Scully Nadherny, RN, CCRC Katherine Elizabeth Nega, BA, CCRC Alexis Neill, RN, CCRC Elise Nicole Nelson, LPN, CCRC, Psoriasis Treatment Center of Central New Jersey Edyta Elzbieta Niebrzegowska, RD, CCRC, Bart’s and the London NHS Trust Massoud Nikkhoy, CCRC Laurie Karoline Noreika, CCRC Harry Nyanteh, CCRC, Omega Research Consultants Owino Emmanuel Ochieng, RN, CCRC, The Walter Reed Project Kristin Ann Oimoen, BS, CCRC, Medical College of Wisconsin Omotayo S Olapo, CCRC, Aurora St Luke’s Medical Center George F Omondi Okoth, CCRC, The Walter Reed Project Michelle Orlick, RN, CCRC Kaloian Ouzounov, MPH, MS, DPM, CCRC, Salvus Research Program Marcella Marie Oyer, MLT (ASCP), CCRC, Associates in Internal Medicine Beth Anne Panella, MA, CCRC, Artemis Institute for Clinical Research Deepti Patki, MS, CCRC, University of North Texas Health Science Center Annette Paulsen, CCRC Amanda Pecarskie, CCRC, Ottawa Hospital Research Institute Gisela Peterson, CCRC, Profil Institute for Clinical Research Joanna Peterson, RN, CCRC, Medstar Health Research Institute Leslie Carole Pettiford, RN, CCRC, University of Florida Shands Cancer Center Justin R Phillips, MICT, CCRC, Heartland Research Associates Cassandra Marie Pitts, CCRC, University of Kansas Cancer Center Joseph Pollard, MPH, CCRC Karen Postema, AA, CCRC, Spectrum Health Sheila Renay Powell, RN, CCRC, Baylor University Medical Center Phyllis Michelle Prien, BS, CCRC, Covance Allison Renee Prisby, CCRC, American Health Research Susan Connor Proe, MS Ed, CCRC Jessica Prutzman, CCRC Amber Purkeypile, BS, CCRC, Loma Linda Veteran Association Health Care System Natalie Ruth Quam, BA, CCRC, Trinity Health Chandra M Ramos, DC, CCRC, Sarkis Clinical Trials Rebecca Marion Rankin Wagenaar, MS, CCRC, Virginia Mason Medical Center Susan Clark Rath, PA-C, CCRC Kristina Rau, CCRC, Ochsner Clinic Foundation Justine Y Rees, CCRC, Hope Heart Institute Mariam Reganyan, BA, CCRC, PAREXEL Susan Carol Reiling, CCRC, Founders Research Nicole Reither, LPN, CCRC, St Vincent’s Medical Center Mariamne Reyna, CCRC, Lenox Hill Hospital Trisha R Riffle, BS, CCRC, Coastal Clinical Research

Christine Almorfe Rivera, CCRC, Advanced Clinical Research Institute Victoria Rodriguez, CCRC Birgit Roller, MA, CCRC, University of Michigan Anna Marie Romo, CCRC, Sun Research Institute Jodette E Rose, CCRC, VA Nebraska Western-Iowa Health Care System Margaret Ross, BA, CCRC, University of Washington, Seattle Cancer Care Alliance Alexandra F Rowden, BA, CCRC, Advent Clinical Research Mika Sakoda, CCRC Miguel Candelario Salazar, BS, CCRP, CCRC, Advanced Rheumatology Rick L Sambucini, RN, CCRC, University of Texas Health Science Center at San Antonio Cheryl Sanders, CCRC Tirath Sanghera, CCRC Michelle Lynn SanPedro, RN, CCRC, Yale School of Medicine Alana Sarah, CCRC, Barwon Health Sandra L Sawyer, CCRC, Orlando Health Monica Sberna, CCRC Stephanie Michelle Schumann, RN, CCRC, Florida Hospital Cancer Institute Connie Scott, RN, BSN, CCRC Janet Dozier Shannon, CCRC Carrie Y Shetley, CCRC, CU Pharmaceutical Research Tara Roark Shetley, CCRC, CU Pharmaceutical Research Susan R Shidel, BS, CCRC, Allergy and Clinical Immunology Associates Hyunjung Shin, RN, CCRC, Samsung Medical Center Clinical Trial Center Jennifer Shue, CCRC Julia Kay Siik, RN, BSN, CCRC Margaret Simonetta, RN, CCRC Narina Simonian, CCRC, Northwestern University Clinical and Translation Studies Institute Suman Singh, CCRC, Medstar Health Research Institute Renee Griselle Smith, CCRC, Emory University Toni L Smith, RN, CCRC Abigail Ensign Snow, RN, BA, BSN, CCRC, Florida Hospital-Pepin Heart Institute Sandra Jean Sparr, RN, CCRC, Heartland Research Associates Heidi Renee Sprouse, CCRC, Blair Medical Associates Kathleen C Steel, RN, CCRC, Diagnostics Research Group Nancy M Stellato, RN, MS, CCRC, North Shore Long Island Jewish Health System Mary Elizabeth Stoker, RN, CCRC Jessica Leigh Sturges, EMT-P, CCRC, Renstar Medical Research Jessica Ann Sullivan, CCRC, Benchmark Research Kristin Surdam, CCRC, Dent Neurologic Institute Jamie M Swanlund, BA, CCRC, Department of Veterans Affairs Nia Joy Swinton-Jenkins, CCRC, Hawaii Pacific Health Mary Tabacchi, CCRC, Washington University Eileen A Taff, MSN, RN, NE-BC, CCRC, St Luke’s Hospital and Health Network Michele Francis Tavish, LVN, CCRC, Wilford Hall Ambulatory Surgical Center

Kris Taylor, CCRC, Las Vegas Physicians Research Group Bridgett Nicara Thompson, CMA, CCRC, Omega Research Consultants Natalie Evette Thurman, BS, CCRC, Allegheny Singer Research Institute Lori Tomassian, CCRC Andrea Torres, CCRC, University of North Texas Health Pediatric Research Payal A Trivedi, MBBS, MS, CCRC, Baylor Research Institute Nicole S Tucker, CCRC, Advanced Clinical Research Melissa Marie Twine, CCRC, Prestige Clinical Research Carey E Uhlenkott, MS, CCRC, St Luke’s Intermountain Research Center Monica E Unger, BA, CCRC Caoimhe Vallely-Gilroy, CCRC, Novartis Pharma AG Cathy Van Every, RN, BSN, CCRC, University of Pittsburgh Medical Center Sarah E Van Meter, CCRC, Four Rivers Clinical Research Katherine Vandris, BA, CCRC, NYU Langone Medical Center Caroline F Vemulapalli, MS, CCRC, Washington University School of Medicine Brandy Venable, RN, CCRC, Wellmont CVA Heart Institute Rebecca Vest, BS, CPhT, RMHC, CCRC, Seattle Children’s Hospital, Research and Foundation Kathryn Vetro, CCRC, Unilever Shari L Vincent, LPN, CCRC, Wake Research Associates Suzanne Vogt, CCRC, Benaroya Research Institute Teresa V Walters, CCRC, Indiana University Health Arnett Jon Ward, CCRC, Aspen Clinical Research Anna P Warren, CCRC Valerie Watson, CCRC, Danbury Hospital Matthew Ryan Weaver, CCRC, Cancer Treatment Centers of America Christina Laela Wegerski, CCRC, Albuquerque Clinical Trials Melody Ann Werne, CCRC, MediSphere Medical Research Center Erica S Westphal, CCRC, Dent Neurologic Institute Ashley Renee Widener, BA, CCRC, Wake Forest School of Medicine Mary E Williams, RN, CCRC, John B Amos Cancer Center Mindy G Winburn, CCRC, Gastroenterology Research of New Orleans April Wolber, CCRC, Accelovance Adam Michael Wong, CCRC, Alamo Medical Research Ruhua Yang, BS, CCRC, Yale University, School of Medicine Yuhua Yang, CCRC, Alpha Medical Research Anu Yohannan, MA, CCRC Randall A Young, MA, LMHC, LCDC III, MAC, CCRC, Goldpoint Clinical Research John Yue, CCRC, University of California, San Francisco Genelou Fuentes Yumping, CCRC Thais Zayas-Bazan, CCRC, Forbes Norris MDA/ALS Research Center Michelle Renee Zulick, RN, CCRC, Sentara Norfolk General Hospital

FXM Research Corp. Hector Wiltz, M.D., CPI (305) 220-5222 Office (305) 675-3152 Fax

FXM Research Miramar Francisco Flores, M.D. (954) 430-1097 Office (305) 675-3152 Fax

FXM Research International - Belize, Central America Julitta Bradley, M.D. & Ines Mendez-Moguel, M.D. (305) 220-5222 Office (305) 675-3152 Fax

The following contact information is provided as a member service and cannot be used for solicitation or commercial purposes of any form.

Chapter Affiliates

Connecting ACRP Members Globally

GLOBAL CHAPTERS

AUSTRALIA
Robyn Lichter Nucleus Network Limited Tel: 613 9076 8909 Fax: 613 9076 8940 [email protected]

GERMANY
Heike Schön [email protected]

BELGIUM
Yves Geysels N.V. Novartis Pharma S.A. Tel: +32 2 246 1669 [email protected]

CANADA
Patricia Jones, ART, RAC Quality & Compliance Consulting Tel: (905) 388-6943 [email protected]

EAST AFRICA
Bernhard R. Ogutu, MMed, PhD Walter Reed Project Tel: +254 (0) 733-812-613 [email protected]

GULF COOPERATION COUNCIL
Dr. Satish Chandra-Nair, MS, PhD, MIS Tel: 971 3 7072452 [email protected]

INDIA
Dr. Ravi Ghooi Bilcare Research Academy Tel: +912066226363 [email protected]

ISRAEL
Please go to the Chapter Website for more information: www.acrpnet.org/GetInfoFor/InternationalChapters/Israel.aspx

JAPAN
Hideo Kusuoka, MD, PhD Osaka National Hospital Tel: 81 6 6942 1331 [email protected]

THE NETHERLANDS
Cecilia Huisman Penthecilia B.V. Tel: +31 65 139 6305 [email protected]

SERBIA
Aleksandra Pesic PSR Serbia Tel: 3811 1337 3760 Fax: 3811 1337 3721 [email protected]

SOUTH AFRICA
[email protected]

SPAIN
Fernando Martinez Bermejo, PhD, MBA inVentiv Clinical Tel: 34915529719 [email protected]

TAIWAN
Pei-Yu Chang Foundation of Medical Professionals Alliance in Taiwan Tel: (02)2321-2362 20 [email protected]

UNITED KINGDOM
Peter Motteram P.A.S.M. Limited [email protected]

U.S. CHAPTERS

ALABAMA
Central Alabama
Alice A. Howell, RN, BSN, CCRC® University of Alabama at Birmingham Tel: (205) 975-8592 [email protected]

ARIZONA
Phoenix
Laura Wilkes, CCRC® Banner Health Tel: (602) 839-5776 [email protected]

CALIFORNIA
Northern California (Bay Area)
Bonnie Miller, RN, MS Bonnie Miller Clinical Research Consulting Tel: (650) 678-9477 [email protected]
Southern California
Gregory Johnson American University of Health Sciences Tel: (562) 988-2278 [email protected]
Greater San Diego
Terence Lloyd Webb, PharmD, MBA, CRCP MedVenture Consultants, Inc. Tel: (619) 922-9328 [email protected]

COLORADO
Front Range (Denver)
Laurie Burnside, MSM, CCRC® The Children's Hospital Tel: (720) 777-4655 [email protected]

CONNECTICUT See MASSACHUSETTS

DISTRICT OF COLUMBIA See MARYLAND

FLORIDA
Central Florida
Michael Leon, RN, CCRC® Florida Premier Research Institute Tel: (407) 740-8078, ext 239 [email protected]
Northeast Florida
Mary Lord Quintiles Tel: (904) 940-9379 [email protected]
Southeast Florida
Sheri Angele Alleyne, MA, CCRC® Tel: (954) 270-8930 [email protected]
Suncoast (Tampa/Clearwater)
Yvonne R. Gorham Moffitt Cancer Center and Research Institute [email protected]

GEORGIA
Atlanta Area
Carole Ehleben, EdD Cear Tel: (770) 449-4424 [email protected]
South Georgia
Steven Ziemba Phoebe Putney Memorial Hospital Tel: (229) 312-0284 [email protected]

INDIANA
Circle City (Indianapolis)
Melissa S. Mau, BS, MS, CCRA® Indiana University School of Dentistry Tel: (317) 201-8507 [email protected]

KANSAS
Greater Kansas City
Christina R. Eberhart Ernst & Young, LLP Tel: (510) 390-1182 [email protected]

LOUISIANA
Southeast Louisiana
Maria Latsis, BS, CCRC® Ochsner Clinic Foundation Tel: (504) 606-7116 [email protected]

MAINE See MASSACHUSETTS

MARYLAND
Baltimore/Washington
Cathy L. Garey, RN, BSN, CCRC® George Washington University Medical Faculty Associates Tel: (202) 741-3168 [email protected]

MASSACHUSETTS
New England
Susan M. Flint, MS, RAC, CCRA®, CCRP Navidea Biopharmaceuticals Tel: (617) 513-3787 [email protected]

MICHIGAN
Southeastern Michigan
Sindhu Halubai, MS University of Michigan Tel: (734) 239-4739 [email protected]

MINNESOTA
Denise C. Windenburg, BA, CCRC® Lillehei Clinical Research Unit/University of Minnesota [email protected]

MISSOURI
Greater Missouri
Carrie Leigh Catanzaro, RN, BSN Tel: (314) 362-5705 [email protected]

NEBRASKA
Great Plains
Holly K. DeSpiegelaere, RN, CCRC® Dept. of Veterans Affairs Medical Center Tel: (402) 995-4171 [email protected]

NEW HAMPSHIRE See MASSACHUSETTS

NEW JERSEY
Cliff Miras, BS, BA Cornerstone SG, LLC Tel: (973) 656-0220 [email protected]

NEW MEXICO
Sheri Romero, LRV, CCRC® NM Clinical Research & Osteoporosis Center, Inc. Tel: (505) 855-5525 [email protected]

NEW YORK
Central New York
Sarah Vander Voort GlaxoSmithKline Tel: (315) 637-4588 [email protected]
New York Metropolitan
Janet E. Holwell, CCRC®, CCRA® Pfizer, Inc. Tel: (718) 263-4160 [email protected]
Western New York
Kristine Lynn Kuryla University of Rochester Tel: (585) 350-2671 [email protected]

NORTH CAROLINA
Greater Charlotte
Gale Wyatt Groseclose, CCRC® Carolinas Medical Center Tel: (704) 355-4875 [email protected]
Research Triangle Park
Jill Suzanne Moody Molek, BS, MA, CCRA® Tel: (919) 792-0355 [email protected]

NORTH DAKOTA
Northern Midwest Mountains to Plains
Brittany Brown Trial Runners, LLC Tel: (701) 483-3599, ext 105 [email protected]
Red River Valley
Kimberly S. Wold, MSPH, CCRC® Sanford Research/USD Tel: (701) 234-5890 [email protected]

OHIO
Greater Columbus
Paula Smailes, RN, MSN, CCRP, CCRC® Tel: (614) 293-3644 [email protected]
Northeastern Ohio (Cleveland/Akron)
Sonya Mihalus, RN, BSN, CCRC® University Hospitals/Case Medical Center Tel: (216) 286-0757 [email protected]

OKLAHOMA
Tulsa
Kathy Buchanan, RN, BSN, CCRC® University of Oklahoma-Tulsa Tel: (918) 744-2453 [email protected]

OREGON
Portland
Lindsay Severson, CCRC® Tel: (503) 494-2316 [email protected]

PENNSYLVANIA
Greater Philadelphia
Jeffrey James Collins Monitorforhire.com Tel: (610) 862-0909, ext 109 [email protected]
Greater Pittsburgh
Barbara J. Early, RN, CCRC® University of Pittsburgh Medical Center Tel: (412) 647-9745 [email protected]

RHODE ISLAND See MASSACHUSETTS

TENNESSEE
Greater Nashville
Suzanne M. Kincaid, CCRA® Sarah Cannon Research Institute Tel: (615) 329-7618 [email protected]
Mid-South (Memphis)
Sandra N. Dodd Family Cancer Center Foundation, Inc. Tel: (901) 685-5655 [email protected]

TEXAS
Central Texas
Ann Rutledge, MA, CCRC® Benchmark Research Tel: (512) 478-5416 [email protected]
Greater Houston Area
Deirdre Smith, RN, CCRC® Texas Health Institute Tel: (832) 355-9801 [email protected]
Greater San Antonio Area
Holly Reade Nolan, MS University of Texas Health Science Center San Antonio Tel: (210) 567-0481 [email protected]
North Texas
Chrystin Pleasants, BS, CCRC®, CCRA® Chrystin Pleasants Research LLC Tel: (214) 826-5752 [email protected]

UTAH
Greater Salt Lake City
Laurie Lesher, MBA, BSN University of Utah Tel: (801) 581-4128 [email protected]

VERMONT See MASSACHUSETTS

VIRGINIA
Central Virginia
Susan Rockwell, MEd Aptiv Solutions Tel: (434) 295-4451 [email protected]

WASHINGTON
Pacific Northwest (Seattle)
Cindy Mendenhall Valley Medical Center Tel: (425) 327-3787 [email protected]

WEST VIRGINIA
Connie Cerullo, MS, CCRC® United BioSource Tel: (304) 366-7700 [email protected]

WISCONSIN
Southern Wisconsin (Madison/Milwaukee)
Christine J. Birchbauer, CCRC® Arthritis Clinic Tel: (414) 476-2423 [email protected]


Association News

Chapter Notes

Belgium
The ACRP Belgian Chapter's successful June meeting focused on "Informed Consent: How Much Information Is Needed?", a presentation by Dr. Ingrid Klingmann, chair of the board of the European Forum for Good Clinical Practice. Dr. Klingmann was honored this year at the ACRP Global Conference in Houston with the William C. Waggoner Award for her outstanding contributions to the clinical research community. The theme of informed consent remains a sensitive one in Europe. The central question is the following: For whom is all the work of informed consent being done? Is the consent form protecting the patient, or is it protecting the sponsor and investigator? Dr. Klingmann spoke about the other involved stakeholders, as well—the ethics committees and insurers, among them—and stated that this topic cannot be discussed and improved upon without all parties being involved in the process. Meanwhile, our chapter is putting the final touches on the agenda for our 15th Annual Conference, to be held on October 25 at the Royal Academy of Medicine of Belgium. Check our website at www.acrpnet.org/GetInfoFor/InternationalChapters/Belgium.aspx for more information.

Central Texas
So far, this year has been filled with activities, membership growth, and community support for our chapter.


We were able to hold a mixer in conjunction with the ACRP Global Conference in Houston, at which special guests Heike Schön, MSc, MBA, and David Vulcano, LCSW, MBA, CIP, RAC, spoke about the importance of getting involved in local chapters. This gave us the opportunity to network and meet with many other conference attendees to discuss our upcoming activities, and many who attended expressed a willingness to help our chapter, which is based in Austin.

Several board members attended a presentation in May at the Austin Chamber of Commerce (Summer BioBash), where Dr. Steven Warach discussed plans to advance translational research in Central Texas. Dr. Warach emphasized that he was going to recruit the best researchers to the institute. This is a wonderful opportunity for our region.

One of our chapter's goals for this year is to reach out to the community and provide education in the research industry. The board has been in communication with Bruce Leander (former president of Ambion) about the possibility of mentoring students at the University of Texas, in addition to holding a panel discussion for the students about the clinical research industry. There is tremendous enthusiasm in Central Texas about a current initiative for creating a new medical school at the university.

Plans for this year continue to emerge; so far, we have speakers David Vulcano scheduled for June and Christine Pierre for November. Membership within our chapter is strongly encouraged, and suggestions or ideas are always welcome. Visit our chapter website at www.acrpnet.org/GetInfoFor/USChapters/TX--Central-Texas.aspx for more information on meeting dates and other news or events.

Greater Charlotte (NC)
Winter and spring have flown by for the Greater Charlotte Chapter. We will meet again on August 14 for a program presented by Dr. Lance Stell, medical ethicist for Carolinas HealthCare System and professor of philosophy and director of Medical Humanities at Davidson College. Dr. Stell will discuss research ethical dilemmas that he has encountered through the years. As always, dinner and time for networking will be provided. Two continuing education credits will be offered for this evening meeting. Meanwhile, our chapter hopes to have ACRP Trustee Liz Wool and many other exciting speakers at our Fall Conference on October 26, so please save the date. The event will be focused on "Emerging Trends in Clinical Research" and will be held at the University of North Carolina at Charlotte. We encourage anyone involved in research in the Greater Charlotte area to join us.

Greater Kansas City (KS)
Launched in the spring of 2012, the Greater Kansas City Chapter of ACRP held its first networking event on April 25, and this gathering was enthusiastically welcomed by the region's growing clinical research community. We would like to thank the event sponsor, PRA International. At press time, we had an educational event scheduled for June at the Kansas Bioscience Authority in Olathe, with Dr. Gregory Kearns, chairman of the Department of Medical Research at Children's Mercy Hospital, presenting on pediatric translational medicine. Additional educational and networking events are scheduled for this fall, so visit www.acrpnet.org/GetInfoFor/USChapters/GreaterKansasCity.aspx for the latest details.

Minnesota
The Minnesota Chapter presented a program in May on "How Well Do You Know Informed Consent—Take the Test!" This event featured a panel whose members provided perspectives from the realms of being a research coordinator at a site, an institutional review board manager, and a sponsor representative. The panelists' presentations were followed by a lively Q&A session. More than 60 people attended and could earn 3.0 continuing education credits from the program. We are planning a networking event this summer to provide an opportunity for our members to get together in an informal setting with their colleagues, as well as to invite a guest and introduce new members to the chapter. Meanwhile, our Programming Committee has been busy planning a two-day program for early November in the form of a "Clinical Trial Boot Camp." This program will cover the basics of how a research protocol moves from initiation through Food and Drug Administration audit; attendees will be eligible to receive 12.0 continuing education credits.

New York Metropolitan
Following several well-received programs held earlier this year, our energetic board has kept up the momentum for our upcoming events. We warmly invite you to:
● Our 4th Annual Clinical Research Symposium, which will be held on September 14 at Pfizer Headquarters in New York City. This year's topic is "Career Development in Clinical Research—Opportunities, Tools, and How to Use This Knowledge to Avoid Noncompliance." An array of speakers will present on different career paths in clinical research, the tools to get there, and how to ensure that you are compliant with regulations surrounding role responsibilities and training.
● Our November 1 education event on Long Island will be hosted at the Feinstein Institute for Medical Research, North Shore-Long Island Jewish Health System. The focus will be on "Building Quality Assurance and Quality Management Systems into Clinical Trials." A free shuttle will be provided from the train, courtesy of North Shore Hospital.
● Our December 6 Annual Holiday Networking event will be at Saks Fifth Avenue, overlooking the Rockefeller Center Tree. Don't miss this one!
For more information or to register online for any of these upcoming events, visit our chapter website at www.acrpnet.org/GetInfoFor/USChapters/NYNewYorkMetropolitan/UpcomingChapterEvents.aspx. We welcome topic suggestions from our chapter members for future educational sessions and social events. Please send your suggestions, questions, or comments to [email protected].

Pacific Northwest (WA)
The Pacific Northwest Chapter is asking you to save the date of September 21, when we will be hosting an all-day educational event offering 4.0 continuing education credits, networking, door prizes, and fun! As of press time, the location and exact time have not been set, but for more information or to help sponsor this event, visit www.acrpnet.org/GetInfoFor/USChapters/PacificNorthwest.aspx or contact Cindy Mendenhall at [email protected].

Southern Wisconsin
The Southern Wisconsin Chapter has provided three educational opportunities that offered a total of 4.0 continuing education credits so far in 2012. Our first two events were reported in the June issue of The Monitor. Our third event was a simultaneous webcast of the Society of Clinical Research Associates' Wisconsin Chapter Meeting from Marshfield, Wis., on "Local IRB vs. Deferred NCI Central IRB: The IRB Perspective and the Regulatory Specialist Perspective." We expect to hold three or four more educational events by the year's end. Short educational events providing a minimum of 2.0 total continuing education credits will take place between June and October, and our largest event will be an all-day Fall Symposium on November 9. The symposium will offer an opportunity to earn 6.0 credits from listening to five speakers on a variety of topics. Please see our chapter website at www.acrpnet.org/GetInfoFor/USChapters/SouthernWisconsin.aspx for more information about our upcoming events.

For complete, up-to-date information from ACRP chapters, see individual chapter websites at http://www.acrpnet.org/GetInfoFor/Chapters.aspx


APCR News

2012 Board of Trustees & Organizational Listing

Officers

President
Michael J. Koren, MD, FACC, CPI Jacksonville Center for Clinical Research Jacksonville, FL

President-Elect
Chris Allen, MD, FRCA, FFPM Merck & Co., Inc. Doylestown, PA

Immediate Past President
Jonathan Seltzer, MD, MBA, FACC Applied Clinical Intelligence, LLC Bala Cynwyd, PA

Trustees

Robert A. Dracker, MD, MHA, MBA, CPI Summerwood Pediatrics Liverpool, NY
Robert Hardi, MD, AGAF, CPI Metropolitan Gastroenterology Group Chevy Chase Clinical Research Chevy Chase, MD
Anita Kablinger, MD, CPI Carilion Clinic—Virginia Tech Carilion School of Medicine Dept. of Psychiatry and Behavioral Medicine Roanoke, VA
Joel S. Ross, MD, FACP, AGSF, CMD, CPI, LLC Memory Enhancement Center of America Eatontown, NJ
Grannum R. Sant, MB, BCh, BAO (Hons.), MA, MD, FRCS, FACS Genzyme Corp. Gloucester, ME
Samuel Simha, MD, FACOG, CPI Research Memphis Associates Memphis, TN

APCR Representatives

AMA Relations
Peter H. Rheinstein, MD, JD, MS, FAAFP, FCLM Severn Health Solutions [email protected]

Nominating Committee

Norbert Clemens, MD, PhD (Chair) CRS Mannheim GmbH Gruenstadt, Germany
Robert Leadbetter, MD (Vice Chair) GlaxoSmithKline Research Triangle Park, NC
Gary Shangold, MD (ABoT Liaison) Convivotech, LLC InteguRX Therapeutics LLC Califon, NJ
Charles M. Alexander, MD, FACP, FACE, CPI (Hon) Merck & Co., Inc. North Wales, PA
Charles H. Pierce, MSc, MD, PhD, FCP, CPI Pierce One Consulting Cincinnati, OH
Peter Stonier, MB ChB, PhD, FRCP, FRCPE, FFPM Faculty of Pharmaceutical Medicine Surrey, United Kingdom
Greg Koski, MD, PhD, CPI (Hon) Massachusetts General Hospital, Harvard Medical School Boston, MA
Jonathan Seltzer, MD, MBA, FACC Applied Clinical Intelligence, LLC Bala Cynwyd, PA
Michael J. Koren, MD, FACC, CPI

APCR Columns

APCR President's Message

Subject Protections or Social Contract?

The current system of "subject protections" cries out for an identity reassignment and relocation.

I must admit that I've never loved the term "subject protections." Don't get me wrong. I stand proudly as a card-carrying proponent of medical research ethics with a resume that backs me up. My experience includes designing and conducting multiple investigator ethics training programs and extensive study of past wrongdoings committed in the name of science. Still, the term "subject protections" just doesn't sit right with me. Perhaps this aversion originates from my childhood growing up in Staten Island, N.Y. I lived down the hill from the filming location of the movie, The Godfather, and, admittedly, mob activity was not unheard of in my neighborhood. Perhaps because of this upbringing, I can't help but muddle the terms "subject protections" and "witness protection program." This self-analysis seems cogent because every time I hear the term "subject protections," my mind wanders off. Inevitably, I picture a faceless government agent escorting a research participant into a barren room for the purpose of de-identification—followed by the surreptitious movement of the subject's records from state to state to "protect" the poor soul from the possible malevolent intentions of Dr. Don, the principal investigator. I'm kidding, of course. Nonetheless, I do believe that the current system of "subject protections" cries out for an identity reassignment and relocation.

How Did We Get Here?
The current foundation of "subject protections" consists of institutional review board (IRB) approval and the written informed consent form. These requirements, initially articulated by the Declaration of Helsinki and the Belmont Report, have become mandated by law and serve to shelter research participants from the inclement activities that, all too often in the past, have exposed vulnerable populations to undue risk or outright harm. However, I would argue that these elements of subject protections provide cornerstones rather than foolproof enclosures. Structurally, IRBs and consent forms can extend only so far in truly minimizing subject risks. I don't wish to trivialize the important role that IRBs play, but I must confess to chuckling when I think of a cartoon brought to my attention by a good friend, John Isidor, a pioneer and leader of an influential IRB for many years. The cartoon, entitled "How IRBs Got Started," depicts a caveman getting pummeled by a large rock falling from a cliff in the first panel, followed by the second panel insight, "Ugh, maybe we need to do something to protect research subjects." In the third panel, we see our friendly Neanderthal research subject covered by a lithe umbrella labeled "IRB" as the next huge boulder falls off the same cliff descending toward the hapless subject's noggin.


The wisdom of the cartoon lies in the idea that once we discover identifiable risks, someone needs to move the subject out of harm's way. A head covering just won't do. IRBs cannot realistically play that role. Only those of us with boots on the ground—primarily the principal investigator—can survey and assess risks as they unfold and then move subjects to a safer place. The informed consent form, the other cornerstone of contemporary subject protections, also suffers from limitations of design. Over my years of experience, consent forms have grown interminably longer and arguably less effective. Although their syntactic construction uses language understandable by a sixth grader, very few of us can grasp the totality of documents greater than 20 pages in length.

Though serving well from a risk management rationale, this contemporary approach to informed consent misses the mark. It seems more like sponsor protection than subject protection.

Under the current research environment, we craft consent forms as legal documents. Then we ask patients to sign them with the unstated goal of full disclosure. This tactic makes perfect sense from a legal standpoint; disclosure transfers responsibility for the disclosed complications from the study team to the patient. Consequently, in the event of an untoward study outcome, a lawyer can tell the subject, "See, I told you so." Unfortunately, though serving well from a risk management rationale, this contemporary approach to informed consent misses the mark. It seems more like sponsor protection than subject protection—at least from my perspective.


When you study recent cases of research misconduct, such as the Roche case at Johns Hopkins or the Gelsinger case at the University of Pennsylvania, you find systematic failures that exposed subjects to unnecessary risks despite the approval of sophisticated IRBs and the execution of extended consent forms. In both these cases, and many other less highly publicized circumstances, problems arose because principal investigators became detached from their primary function of assessing and minimizing risks.

Next Steps
I would argue that addressing this detachment, while encouraging investigators to focus on ongoing risk assessment, represents the most important area for ongoing improvement in subject protections. I make this statement for an obvious reason: The principal investigator is the most qualified person to assess risks for each subject prior to enrollment and then to monitor for new sources of risk as they emerge. Based on our clinical experience, ethical obligations, proximity to the subject, and knowledge of the research environment, only the principal investigator holds all the tools to get the job done. I find it terribly ironic that current law requires principal investigators to comply with all types of rules, but doesn't directly address the thing that truly qualifies us to lead clinical investigation programs—our ability through training and skills to assess ongoing subject risk and balance those risks against the requirements of the research process and goals of each particular project. In this area, the investigator stands uniquely. Let's face it. Subjects cannot fully advocate for themselves because, with rare exceptions, they have neither the knowledge nor experience to accurately assess risk. Neither can sponsors or IRBs balance risks on an individualized basis because, regardless of their resources and experience, they don't know the patients. Further, other members of the study team, such as

the coordinators who provide invaluable assistance and usually administer consent forms, generally don’t receive training or fully understand the details of relative and absolute risk within a therapeutic area to pass judgment on these issues. So, if you agree with me thus far, I suspect that you will also share my view that the informed consent process needs major reforms. Let me suggest three possible changes: Number one, consent forms should never exceed three or four pages. These documents should succinctly explain the rationale for conducting the research and include a table of predictable risks with the anticipated frequency of these risks.

Subjects cannot fully advocate for themselves because, with rare exceptions, they have neither the knowledge nor experience to accurately assess risk. The study team should make the case for the risk estimates of these tables within the protocol, which an IRB, in turn, may accept or amend. Combining the study rationale, this risk table and a schedule of study visits and procedures in a document gives the patient almost all the information that he or she reasonably needs to assess, as a layperson, whether the study feels right for participation. Number two, during the consent process, the subjects should affirm on the first page that they have had the opportunity to ask questions and have decided to proceed with study participation under their own free will. This type of affirmative statement departs from the disclosure approach. In this model, patients accept responsibility for their decision, not because they’ve signed up after we told them everything that could possibly go wrong, but

because they have decided to pursue what they want. It’s a key distinction.

Although I intellectually understand the important distinction between research and practice, many ordinary folks just don’t care about this point. Many IRBs have added this language to the end of their consent forms, but I would move it to the beginning to frame the entire discussion. Let patients direct the process. Let them indicate to us what they want to know and what they don’t care about. Over many years of experience discussing research participation with patients, I’ve humbly learned that no one can predict, a priori, the most important issues to a given patient who is deciding about research. Sometimes the draw for patients is tangible, such as access to a specific treatment or reimbursement for time and travel. Other times a subject’s decision derives from the intangible feeling about wanting to participate in something that he or she deems important, rather than any specifics of the protocol. I’ve also encountered many circumstances in which a patient decides to participate in research believing that this commitment will result in more attention and a better outcome than that afforded to an “ordinary patient.” Although I intellectually understand the important distinction between research and practice, many ordinary folks just don’t care about this point. They enthusiastically cast their lot with a physician, coordinator, or research team. These patients trust that the team will exercise its best judgment on their behalf, and they perceive benefit from their commitment. Further, these same folks don’t struggle, unlike overly idealistic ethicists, with the idea that investigators get paid for research. In fact,

they hold the opposite view. Professionals who get paid for something take it seriously, and subsequently provide better outcomes for their “charges.” Number three, I believe that both the patient and investigator should sign a “social contract.” This idea derives from the implications of clinical research results on society. When someone agrees to participate in a clinical trial, the outcome of the relationship affects many people beyond just the subject and principal investigator. With each study, we add to collective wisdom. Because the approval of new medical products and standards of care hinges on clinical investigation, misconduct of the principal investigator— or the subject, for that matter—can adversely affect many other people. Both parties should acknowledge this profound dynamic during every consent process, in my view.

Implications of the Contract

For subjects, the social contract requires using the study product as prescribed, making an effort to continue in the study unless circumstances change in an unforeseen way, and reporting relevant information to the study team. Though the subject should never experience coercion to remain active in a study, he or she should acknowledge an understanding that dropping out of a research study prematurely can undermine the study team's ability to deploy the scientific method to help or protect the public. Investigators, in turn, should affirm that they will use their knowledge and serve as the subject's personal advocate. This advocacy role should involve a thoughtful balancing of the complex overlap of individual, subject, and protocol interests that only a principal investigator can execute. Sometimes this advocacy requires us to withdraw patients and, at other times, to make the case to a sponsor to prevent a subject from getting dropped from a study. As investigators, the social contract would also require us to be compulsive about carrying out our ICH/FDA Good Clinical Practice obligations, to insist upon the publication of both positive and negative results, and to apply lessons learned to individual subjects. Currently, consent forms do not require an affirmative statement from investigators, but I would argue that a one-sentence statement by an investigator would provide more "protection" than 10 pages of disclosures. What about legal liability within the social contract? Well, I'm not an attorney, but I have full confidence that some clever legal detective will comb the vastness of the rules and regulations to strike the right language that defines and limits exposure as long as we carry out our jobs of risk assessment.

This advocacy role should involve a thoughtful balancing of the complex overlap of individual, subject, and protocol interests that only a principal investigator can execute.

The fact is that folks tend to enjoy better outcomes than predicted by their medical circumstances when they participate in clinical trials. This phenomenon occurs for a variety of reasons, including closer follow-up and the magic of the Hawthorne Effect. So I believe we can make the case that both benefits and limitations of liability should fall within the confines of the social contract. In the end, embracing a model bound within a social contract should lead to better outcomes for all of the participants in research, including subjects, sponsors, investigators, and staff. By departing from our past focus of defensiveness in favor of beneficence, we give new meaning to a scene in which the boss puts his arm around a subject and mumbles like an avuncular Marlon Brando, "You'll be glad to know I have a contract out for you."


APCR Columns

PI Corner

Joel S. Ross, MD, FACP, AGSF, CMD, CPI, LLC

Finding Studies and Recruiting Patients
Is There Synergy?

Which problem would the owner of a clinical research center rather have:
● Plenty of studies, but few patients, OR
● Plenty of patients, but few studies?
We'll take it as a given that nobody wants to have both very few patients for studies and too few studies for them to consider. So let us look at the intertwined challenges of finding studies and finding patients who are interested and eligible for those studies.

Casting a Net and Standing Out in a Crowd
Where do you find studies? Many research coordinators whom I have met at investigator meetings over the past 30 years frequently tell me they are responsible for finding sponsors or contract research organizations (CROs) to "land a study." This is not an easy process these days. With more and more physicians looking for alternative or supplemental income to counterbalance their ever diminishing reimbursement from insurance payments, research is becoming increasingly attractive. What these newcomers to research quickly find is that it "ain't so easy." I would estimate that, for every study that needs 50 sites, more than 500 potential ones are first located by either a CRO or the sponsor/CRO combination. Then, to be sure of making the cut, a site has to


be in the “top 10%” of all the sites being ranked for suitability for the study. Perhaps half of all the business goes routinely to these same “percenters” who are almost always on that same list of sites (often academic sites and several nonacademic high-enrollers with often very good quality data).

Recruiting and retaining the right patients can be just as hard for sites as landing studies in the first place. Here are some tips on how to rise to meet both challenges.

So how can a site that has not yet risen to the lofty heights of the top 10% maintain its flow of studies? Let's look at some options:
● Option 1. Attend as many conferences as possible related to your field of interest. In my field, which is memory loss, we search for studies that cover a gamut of conditions, including prevention of mild cognitive impairment (MCI), prevention of Alzheimer's disease (AD) or dementia, and treatment studies for MCI, AD, or dementia. One can attend a meeting per week to mingle with CROs, sponsors, and lead scientists doing Phase I work, or even with molecular biologists studying new pathways leading to disease entities. So how does one decide what meetings to attend? My best advice is that you concentrate on meetings that have resulted historically in quality networking and landing a quality study for you or your peers. Better yet, if you land a study, enroll the contracted number of high-quality subjects, and collect excellent data, you can be sure of returned business when the word spreads that you do a good job. Doing a bad job even one time gets you on the dreaded "black list," even without a Form 482 from the Food and Drug Administration.
● Option 2. Use a broker to represent your site. This refers to working with an individual or corporation that does the "shopping" for you, if you have little time as a principal investigator (PI) to fly all over the world seeking studies. They usually will charge a fair commission, often 6% of the study budget on a per-patient randomized basis, excluding pass-through costs.
● Option 3. Affiliate with an existing high-enrolling, high-quality site (qualities not always to be found in one and the same site, as I have discussed in earlier columns). Link to their contacts and introduce your site as willing to do all it takes to be the best in the business. This option is my least favorite.

Moving Right Along and Sifting for Gold
Now let's look at how you can maintain a steady flow of patients. Arguably, recruiting and retaining the right patients can be just as challenging as finding studies. It might take some time to find a study for which you can enroll the targeted number of subjects, keep to the low dropout rate you agreed to, and sign off as PI on the "data lock" in a timely fashion (something I am doing at this very moment while preparing this entry). However, once you have the study, the true challenge is to find the "right patients, for the right study, at the right time of their illness," and to keep them in the study for the right amount of time (dictated of course by the protocol).

At any given time I have, on average, seven-to-nine actively enrolling studies solely in the field of MCI, AD, or dementia. I prescreen in my office approximately 10 patients in order to find each one that initially seems to fit all the prescreen inclusion/exclusion criteria for a particular study. Thus, it is a 10% prescreen success rate. Then, of those patients whom I feel qualify, only half are randomized after pre-study dropouts occur from such factors as unsuspected MRI findings, last-minute changes of heart from patients or their families, ECG abnormalities being noted, unexpected concurrent illnesses being experienced prior to randomization, etc. In the end, it is more like a 5% yield from all the patients I see at my center actually reaching the point of randomization to a study. These subjects are recruited through a very ambitious set of institutional review board–approved advertisements and lectures at senior centers, retirement communities, houses of worship, AARP chapter meetings, Salvation Army facilities, and assisted living/nursing homes within a 50-mile radius of my three centers.
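To make the screening-funnel arithmetic above explicit (a restatement of the column's own figures, not an addition to them):

\[
\underbrace{0.10}_{\text{pass prescreen}} \times \underbrace{0.50}_{\text{reach randomization}} = 0.05,
\]

that is, roughly one randomized subject for every 20 patients prescreened.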

Closing Advice
One must be patient to find the best patients for a study. The worst thing a site can do is enroll a large number of patients and have a high dropout rate due to poor communication of the exact study requirements, such as keeping appointments, being very compliant, reporting any and all adverse events and changes in medications, etc. In summary, it is best to have a small number of studies with high enrollments, rather than many studies with low numbers of enrolled subjects.

Joel S. Ross, MD, FACP, AGSF, CMD, CPI, LLC, is founder, chairman, and president of the Memory Enhancement Center of America, Inc, a Phase I, II, and III Alzheimer’s disease evaluation and treatment center, and the medical director of Iberica USA’s Phase I research center, both located in Eatontown, N.J. He received his geriatric fellowship training at Mt. Sinai Medical Center in New York City, where he holds a clinical adjunct professorship in the Department of Geriatrics. He also was the first board-certified, fellowship-trained geriatrician in the state of New Jersey. He can be reached at [email protected].

The APCR Awards

APPLICATIONS OPEN AUGUST 1, 2012

Recognizing Outstanding Physician Leadership
Nominate yourself, your team, or someone you know for a 2013 APCR Award and help recognize integrity and excellence in the clinical research profession. In addition to worldwide recognition by peers and press, APCR Award winners will receive a year of free APCR Membership and complimentary registration to the ACRP 2013 Global Conference in Orlando, Florida. Winners are selected through a peer-review process, conducted by a committee of independent physicians in clinical research.
Award Categories:
● The Lifetime Achievement Award
● Outstanding Physician Leadership in the Profession
● Special Recognition Award

www.apcrnet.org/awards


Home Study Test

Earn 3.0 Continuing Education Credits

Performance Metrics in Clinical Trials

In this issue of the ACRP Monitor, three articles have been selected as the basis for a Home Study test that contains 30 questions. For your convenience, the articles and questions are provided in print as well as online (members only) in the form of a PDF (requires Adobe Reader and text file). This activity is anticipated to take three hours. Answers must be submitted using the electronic answer form online (members only, $32). Those who answer 70% of the questions correctly will receive an electronic statement of credit by e-mail within 24 hours. Those who do not pass can retake the test for no additional fee.

Hardware/Software Requirements: Home Study tests require version 4.x browsers or higher from Internet Explorer, Mozilla Firefox, or Safari. A browser that can run Adobe Flash 9.0 is required to view the digital edition of The Monitor, and Adobe Acrobat is required to view PDFs of the Home Study test.

The August 2012 Monitor Home Study is based on the following three articles in this issue:
1. Clinical Metrics 102: Best Practices for the Visualization of Clinical Performance Metrics, by Paul Hake, BEng, ACA, Executive for Global Healthcare and Life Sciences, IBM Business Analytics
2. What Gets Measured Gets Fixed: Using Metrics to Make Continuous Progress in R&D Efficiency, by David S. Zuckerman, MS, President, Customized Improvement Strategies, LLC
3. Metrics in Medical Imaging: Changing the Picture, by Hui Jing Yu, PhD, Medical Affairs Scientist; Colin G. Miller, PhD, Senior Vice President for Medical Affairs; and Dawn Flitcraft, Senior Vice President for Client Services, all at BioClinica

Home Study Learning Objectives

After reading these articles, participants should be able to:
1. understand the value of performance metrics and choose the best chart types for common analytical objectives.
2. identify the various types of metrics required to manage and improve performance in their organization and perhaps develop some of their own metrics.
3. describe how an imaging core lab partners with sponsors to use metrics to ensure the collection of quality imaging endpoint data for clinical research studies.

This test expires on August 31, 2013 (original release date: 8/01/2012)


Continuing Education Information

The Association of Clinical Research Professionals (ACRP) is an approved provider of medical, nursing, and clinical research continuing education credits.

Contact Hours
The Association of Clinical Research Professionals (ACRP) provides 3.0 contact hours for the completion of this educational activity. These contact hours can be used to meet the certification maintenance requirement. (ACRP-2012-HMS-008)

Continuing Nursing Education
The California Board of Registered Nursing (Provider Number 11147) approves the Association of Clinical Research Professionals (ACRP) as a provider of continuing nursing education. This activity provides 3.0 nursing education credits. (Program Number 11147-2012-HMS-008)

Continuing Medical Education
The Association of Clinical Research Professionals (ACRP) is accredited by the Accreditation Council for Continuing Medical Education to provide continuing medical education for physicians. The Association of Clinical Research Professionals designates this enduring material for a maximum of 3.0 AMA PRA Category 1 Credits™. Each physician should claim only the credit commensurate with the extent of their participation in the activity.

ACRP Disclosure Statement

As an organization accredited by the Accreditation Council for Continuing Medical Education (ACCME®), the Association of Clinical Research Professionals (ACRP) requires everyone who is in a position to control the planning of content of an education activity to disclose all relevant financial relationships with any commercial interest. Financial relationships in any amount, occurring within the past 12 months of the activity, including financial relationships of a spouse or life partner, that could create a conflict of interest are requested for disclosure.

The ACCME defines a financial relationship as one in which the individual benefits by receiving a salary, royalty, intellectual property rights, consulting fee, honoraria, ownership interest (e.g., stocks, stock options, or other ownership interest, excluding diversified mutual funds), or other financial benefit from a commercial interest. Financial benefits are usually associated with roles such as employment, consultancy, management position, teaching, membership on advisory committees or review panels, board membership, speaker's bureaus, and other activities from which remuneration is received, or expected. A "commercial interest" is defined as any entity producing, marketing, reselling, or distributing healthcare goods or services consumed by, or used on, patients. A company that provides a direct service to patients is not considered a commercial interest.

ACRP members are often employed by commercial interests. However, if the content of the educational activity and/or committee responsibility deals with the regulatory and ethical requirements of the conduct of clinical research, it is not considered a conflict of interest. It is also not considered a conflict of interest if the content is not about or promoting products or services of commercial interests. If an individual has a relationship that is deemed a conflict of interest, the conflict must be resolved prior to the activity. The intent of this policy is not to prevent individuals with relevant financial relationships from participating; it is intended that such relationships be identified openly so that the audience may form their own judgments about the presentation and the presence of commercial bias with full disclosure of the facts. It remains for the audience to determine whether an individual's outside interests may reflect a possible bias in either the exposition or the conclusions presented.

ACRP Editorial Advisory Board

Iris Gorter de Vries, PhD (Chair): Nothing to Disclose
Erika J. Stevens, MA (Vice Chair): Nothing to Disclose
Dawn Carpenter, BS, MHsc, CCRC: Nothing to Disclose
Norbert Clemens, MD, PhD: Board Member, Association of Clinical Research Professionals and German Society of Pharmaceutical Medicine; Treasurer, International Federation of Associations of Pharmaceutical Physicians
Amy Leigh Davis, DBA, MBA: Stock Shareholder, Baxter International, Cardinal Health, Schering-Plough, Pfizer
Marie Fleisner, CMA, CUT: Nothing to Disclose
Beth Harper, MBA: Board Member, TrialX.org
Dana Keane, BS, CCRA, CCRP: Nothing to Disclose
Vicky Parikh, MD, MPH: Nothing to Disclose
Theresa Straut, BA, CIP, RAC: Nothing to Disclose
Liz Wool, RN, BSN, CCRA, CMT: Board Member, Association of Clinical Research Professionals; Honoraria, Barnett International; Stock Shareholder, Centerphase Solutions, Inc.
Franeli Yadao, MSc, BA, CCRA: Nothing to Disclose

ACRP Staff/Consultants

Barbara A. Bishop, CCRA: Nothing to Disclose
A. Veronica Precup: Nothing to Disclose
Julie F. Bishop, CCRA: Nothing to Disclose
Linda B. Sullivan, MBA: Nothing to Disclose
Gary W. Cramer: Nothing to Disclose
Celestina Touchet, MBA: Nothing to Disclose


Questions 1–10
Clinical Metrics 102: Best Practices for the Visualization of Clinical Performance Metrics

1

What is the PRIMARY goal of a clinical metrics system? A. To provide common benchmarks for understanding performance B. To help managers understand trending and enrollment patterns C. To improve performance through datadriven decision-making D. To determine if the research study is under or over budget

2

Why is flexibility critical for a metrics system? 1. We don’t know where our analysis will lead. 2. Different managers have different needs. 3. We need to define our own performance goals. 4. Managers should choose the best chart type for each situation. A. 1 and 2 only B. 2 and 4 only C. 3 and 4 only D. 1 and 3 only

3

Which items should always be included on a chart? 1. A red-yellow-green indicator 2. The date reference or timeframe of the data 3. Original version and current forecast for comparison 4. A way to interact with the chart (e.g., drill down, prompts, or filters) A. 1 and 4 only B. 1 and 2 only C. 3 and 4 only D. 2 and 4 only

4

What is often more important than the actual value of a metric? A. Its trend over time B. The financial cost of the measure C. Comparison to competitors D. Ensuring they don’t change from green to red in one step


5

What is the best format for ranking studies by the number of patients? A. A 3-D pie chart B. A bar chart C. A line chart D. A scatterplot chart

6

What is the best option for visually comparing data that have many (>10) categories or classes? A. Multiple pie charts B. A table C. A scatterplot chart D. A grouped bar chart

7

What aspect(s) should always be featured in a line chart? A. A time series B. Budget and actuals C. Prompts or filters D. Multiple categories of data

8

What is the best format to visualize relationships between two variables (e.g., quality score versus number of amendments)? A. A pie chart B. A scatterplot chart C. A box and whiskers chart D. A cross-tab table

9

What are the two measures being analyzed on the enrollment runway chart? A. Actual enrollment versus planned enrollment B. Actual enrollment versus elapsed enrollment time percentage C. Forecast enrollment versus screen failures D. Latest forecasted enrollment versus original budgeted enrollment

10

What does the box represent on a box and whiskers chart? A. The most common values B. A 95% confidence interval C. The middle 50% of values D. All the significant data

Questions 11–20
What Gets Measured Gets Fixed: Using Metrics to Make Continuous Progress in R&D Efficiency

11

The biopharma-device industry seems to have many organizational obstacles to research and development success: A. and no one has ever solved these problems. B. but other industries have solved these problems and we can, too. C. but this really is not a problem at all since we get the job done. D. so we should just give up and go home.

12

“What gets measured gets ____”: A. ignored. B. trashed. C. fixed. D. fumigated.

13

When aligned with your strategies and goals, metrics provide:

1. fear. 2. reinforcement. 3. energy. 4. feedback. A. 1 and 2 only B. 3 and 4 only C. 1 and 3 only D. 2 and 4 only

14

What are the four major organizational improvement categories of metrics? A. Financial, People, Systems, Processes B. People, Processes, Teams, Values C. Financial, Customer Satisfaction, Performance, Organizational Growth D. Customer Satisfaction, Supplier Satisfaction, Employee Satisfaction, Leadership

15

What are the categories of performance metrics? A. Cycle time, quality, efficiency B. Timeliness, cycle time, quality, efficiency C. Cost, quality, time D. People, processes, teams, values

16

Timeliness measures are more important during the _______ of a project, whereas cycle time measures are more important during the _________ of a project. A. beginning and end; middle B. middle; beginning and end C. beginning; end D. end; beginning

17

Which of the following are quality metrics, rather than efficiency metrics? 1. Staff hours 2. Queries 3. Amendments 4. Low-enrolling sites A. 1, 2, and 3 only B. 1, 2, and 4 only C. 1, 3, and 4 only D. 2, 3, and 4 only

18

When is it cheapest (lowest cost) to fix a problem? A. Early in the project B. During the middle of the project C. When the problem is full blown D. At the end of the project

19

How many performance metrics should you use (total for timeliness, cycle time, quality, and efficiency)? A. 3 B. 6 C. 9 D. 12

20

Which of these are financial metrics rather than customer satisfaction metrics? 1. Cost per clean data point 2. Site satisfaction 3. Budget accuracy 4. Cost per subject A. 1, 2, and 3 only B. 1, 2, and 4 only C. 1, 3, and 4 only D. 2, 3, and 4 only

Corrections
Corrections to Home Studies can be found on the ACRP website and are incorporated directly into the online test.

Questions 21–30
Metrics in Medical Imaging: Changing the Picture

21

Which of the following are types of medical images? 1. Computed tomography 2. Dual energy X-ray absorptiometry 3. Magnetic resonance images 4. Blood pressure sphygmomanometry A. 1, 2, and 3 only B. 1, 2, and 4 only C. 1, 3, and 4 only D. 2, 3, and 4 only

22

In clinical trials, medical imaging is used to evaluate for: 1. efficacy endpoints. 2. safety evaluations. 3. eligibility criteria. 4. diagnostic purposes. A. 1, 2, and 3 only B. 1, 2, and 4 only C. 1, 3, and 4 only D. 2, 3, and 4 only

23

Using imaging performance metrics to monitor image quality: 1. ensures targets assigned to each metric are met. 2. allows appropriate levels of control for the ICL and sponsors. 3. guarantees drug approval. 4. enhances trial performance and quality. A. 1, 2, and 3 only B. 1, 2, and 4 only C. 1, 3, and 4 only D. 2, 3, and 4 only

24. Which of the following are typical services provided by an imaging core lab?
1. Provide imaging scanners to sites
2. Develop imaging review charter
3. Conduct independent read
4. Collect image data
A. 1, 2, and 3 only
B. 1, 2, and 4 only
C. 1, 3, and 4 only
D. 2, 3, and 4 only

25. Prior to being sent for the radiological evaluation or read, data arriving at the imaging core lab are usually inspected by:
A. physicians.
B. radiological technologists.
C. statisticians.
D. regulatory representatives.

26. If there are issues with image quality, what can the imaging core lab generate and send to the site for immediate resolution?
A. A fine or penalty
B. Data clarification form or query
C. A new imaging guideline
D. Patient information

27. Standardization of image acquisition between sites can usually be accomplished by providing training to each site via:
1. Imaging guidelines
2. Telephone
3. WebEx
4. Newsletter
A. 1, 2, and 3 only
B. 1, 2, and 4 only
C. 1, 3, and 4 only
D. 2, 3, and 4 only

28. A key first step to ensure that quality imaging endpoint data are collected for studies is to:
A. encourage site modification of imaging guidelines.
B. qualify only the sites with the highest enrollments.
C. allow no more than one specific scanner manufacturer across sites.
D. have standardization of image acquisition between sites.

29. Which of the following image qualities will definitely result in a query?
A. Readable (evaluable) both by the technologist and the readers
B. Readable but not optimal
C. Not readable by both the technologist and the readers
D. Readable (evaluable) by the radiologists

30. Why is electronic submission of images preferred over courier submission?
A. It reduces the number of queries.
B. It increases image quality.
C. It reduces the transit time.
D. It minimizes site training.




500 Montgomery Street, Suite 800
Alexandria, VA 22314
Tel: (703) 254-8100
Fax: (703) 254-8101
E-mail: [email protected]
Website: www.acrpnet.org

Follow us on Twitter: www.twitter.com/ACRPDC
Become a fan on Facebook: www.facebook.com/ACRPDC
Find us on LinkedIn: www.linkedin.com/groupRegistration?gid=46141

Association Board of Trustees

Core Values

Integrity: We value integrity as the foundation of all our business practices. We value honesty, fairness, and advancing the mission of our organization without compromise.

Dedication: We value unwavering dedication to the clinical research profession, our stakeholders, and our mission. We strive for caring and passionate leadership in the delivery of effective and ethical programs.

Courage: We value having the courage to further the vision of the organization. We value the courage to imagine what can be possible, speak what we believe is the truth, to be different, make difficult decisions, pioneer innovative solutions, and adapt quickly to changing dynamics.

Communication: We value communicating openly, honestly, and knowledgeably. We promote active dialogue with all our stakeholders, fostering a learning environment and recognizing excellence.

Service: We value serving the clinical research community with commitment and passion. We serve by listening to our stakeholders, understanding their needs, and leading with conviction and humility. We are committed to building trusted and lasting relationships.

Chair: Clara H. Heering, MSc, MSc, Quintiles, Vilvoorde, Belgium
Vice Chair: Gary Shangold, MD, Convivotech, LLC, and InteguRx Therapeutics, LLC, Califon, NJ
Immediate Past Chair: Valerie Willetts, RN, BSN, CCRA, ASKA Research, Vancouver, Canada

Treasurer: John D. Irvin, MD, PhD, Former Vice President of Merck Research Labs and Senior Vice President of J&J/Merck, Marco Island, FL
Norbert Clemens, MD, PhD, CRS Mannheim GmbH, Gruenstadt, Germany
Teresa-Lynn (Terri) Hinkley, RN, BScN, MBA, CCRC, Consultant, Ontario, Canada

Brent Ibata, PhD, JD, MPH, RAC, CCRC, Sentara Cardiovascular Research Institute, Virginia Beach, VA
Jeff Kingsley, DO, MBA, MS, CPI, FAAFP, SERRG, Inc., Columbus, GA
Dennis J. LaCroix, JD, Reading, MA
Fernando Martinez, PhD, MBA, inVentiv Clinical Solutions, Madrid, Spain

Yafit Stark, PhD, Teva Pharmaceutical Industries, Ltd., Netanya, Israel
Lynn Van Dermark, RN, BSN, MBA, CCRA, RAC, MedTrials, Inc., Dallas, TX
Liz Wool, RN, BSN, CCRA, CMT, QD-Quality and Training Solutions, Inc., San Bruno, CA
Nonvoting Member: James D. Thomasell, CPA, ACRP Executive Director, Alexandria, VA

Committee Chairs

2013 Global Conference & Exhibition: Stephen Zalewski, PharmD, CCRC, Albuquerque, NM, [email protected]
Awards & Recognition: Susan Coultas, MS, Cardinal Health Specialty Solutions, Dallas, TX, [email protected]
Chapters: Laurin Mancour, CCRA, CCRP, RAC, Atheneum Consulting, LLC, Durham, NC, [email protected]

Editorial Advisory Board: Iris Gorter de Vries, PhD, Consultant, Ternat, Belgium, [email protected]
Professional Development: Fidela Llorca Morena, MD, Internal Medicine/Cardiology, Salt Lake City, UT, [email protected]
Ethics: Charles M. Alexander, MD, FACP, FACE, CPI (Hon), Merck & Co., Inc., North Wales, PA, [email protected]

The contact information provided here is for member use only and may not be used for solicitation or commercial purposes.




Finance: John D. Irvin, MD, PhD, Former Vice President of Merck Research Labs and Senior Vice President of J&J/Merck, Marco Island, FL, [email protected]
Membership: Susan Flint, MS, RAC, CCRA, CCRP, Navidea Biopharmaceuticals, Westford, MA, [email protected]
Nominating: Susan Rockwell, MEd, Aptiv Solutions, Charlottesville, VA, [email protected]



Regulatory Affairs: Linda Strause, PhD, Vical Incorporated, San Diego, CA, [email protected]

Academy of Clinical Research Professionals Board of Trustees

Chair: Charles H. Pierce, MSc, MD, PhD, FCP, CPI, Cincinnati, OH
Immediate Past Chair: Albrecht de Vries, MSc, CCRA, Janssen Pharmaceutical Companies of J&J, Tilburg, The Netherlands
Public Member: Jeannine Bayard, BSN, MPH, St. Paul, MN
Kelly M. Craig, MA, BASc, CCRA, APMR, Boehringer Ingelheim Canada, Ontario, Canada

Deborah L. Rosenbaum, CCRC, CCRA, Sarrison Clinical Research, LLC, Cary, NC
Susan Warne, LVN, CCRC, Quintiles, Houston, TX
Clara H. Heering, MSc, MSc (ex officio), Quintiles, Vilvoorde, Belgium
Gary Shangold, MD (ex officio), Convivotech, LLC, and InteguRx Therapeutics, LLC, Califon, NJ

Committee Chairs

Global CCRA Exam: Robert J. Greco, BS, RPh, MPH, CCRA, Greco Clinical Partners, LLC, Mountainside, NJ
Global CCRC Exam: Jarrod Midboe, BS, CCRC, WCCT Global, Cypress, CA
Global CPI Exam: Stephen Louis Kopecky, MD, CPI, Mayo Clinic, Rochester, MN
Nominating Committee: Sandra J. O’Donnell, MA, MT(ASCP), CCRA, Wilmington, NC

Dennis DeRosia, PA, BS, MA, CCRA, Profil Institute for Clinical Research, Inc., Chula Vista, CA

ACRP Staff

J. Alan Armstrong Chief Operating Officer [email protected] Tel: (703) 254-8107

Gary W. Cramer Associate Editor [email protected] Tel: (703) 258-3504

Kris Lawson Staff Accountant [email protected] Tel: (703) 258-3512

Christopher M. Arnold Director, Finance and Administration [email protected] Tel: (703) 253-6268

Esther Daemen, CPP, CPM Director, Professional Development [email protected] Tel: (703) 254-8100 ext. 3530

Romy Maimon Member Services Representative [email protected] Tel: (703) 258-3517

Megan Bailey Web Editor/Communications Coordinator [email protected] Tel: (703) 253-6277

Nancy Elmahdy Marketing Manager [email protected] Tel: (703) 253-6274

Brannan Meyers Global Community Administrator [email protected] Tel: (703) 253-6276

Scott Garvey Office Administrator [email protected] Tel: (703) 258-3514

David Montgomery, CCRA Manager, Clinical Research Training [email protected] UK Tel: +44 (0) 1753 831906

Jeremy Glunt Marketing Manager [email protected] Tel: (703) 258-3506

Concepcion Morris Member Services Administrator [email protected] Tel: (703) 254-8112

Tiffany Green Member Services Representative [email protected] Tel: (703) 254-8100

A. Veronica Precup Editor-in-Chief [email protected] Tel: (703) 254-8115

Janie C. Hakim Certification Maintenance and Quality Administrator [email protected] Tel: (703) 258-3513

Jenna Rouse Director, Marketing and Communications [email protected] Tel: (703) 254-8109

Megan Balkovic Governance Manager [email protected] Tel: (703) 253-6273
Alexi Battin Global Conference Manager [email protected] Tel: (703) 258-3505
Patricia C. Beeson Director, Human Resources [email protected] Tel: (703) 254-8114
Elice Behr Accounting Manager [email protected] Tel: (703) 258-3510
Thomas Coffey, CPLP Training and Development Manager [email protected] Tel: (703) 253-6281

Morgean Hirt, ACA Director of Certification [email protected] Tel: (703) 254-8104
Sara Kilkenny Associate Director of Volunteer Relations [email protected] Tel: (703) 253-6270

Chris Samuel Training and Development Administrator [email protected] Tel: (703) 253-6271

Matthew Sapurstein Marketing and Communications Administrator [email protected] Tel: (703) 253-6284
Cindy Savery, CMP, CEM Meeting Planner [email protected] Tel: (703) 253-6272
James D. Thomasell, CPA Executive Director [email protected] Tel: (703) 254-8105
Melodie Walker-Edmund Database Manager [email protected] Tel: (703) 253-6267
Lynette Wilhelm Certification Coordinator [email protected] Tel: (703) 254-8103
Jennifer Witebsky Initial Certification and Project Administrator [email protected] Tel: (703) 253-6282
Melyssa Wolf Global Conference Coordinator [email protected] Tel: (703) 258-3508
Shelly Woolsey Membership Manager [email protected] Tel: (703) 254-8108




ACRP/APCR Uniform Code of Ethics and Professional Conduct

The Association of Clinical Research Professionals (ACRP) and the Academy of Physicians in Clinical Research (APCR) are global organizations committed to promoting excellence and professionalism in clinical research and pharmaceutical medicine. ACRP and APCR members are engaged in all aspects of discovery, testing, development and application of drugs, devices and biologics. The scope of their activities encompasses research and medical practice as well as diverse business activities, regulatory affairs, advocacy, education and other professional endeavors. Members and certificants, that is, those who receive special certification from ACRP’s component professional academies (The Academy of Physicians in Clinical Research and The Academy of Clinical Research Professionals), affirm their commitment to upholding the highest standards of personal and professional behavior in the conduct of their endeavors. This Code offers guidance to help them fulfill their commitments to responsibly conduct their professional activities.

While pursuing their professional endeavors, all members and certificants shall:
• Be mindful and respectful of the important distinctions between medical practice and research.
• Accept ensuring the safety and welfare of human subjects and patients as their highest goal.
• Execute their work in accordance with standards of scientific objectivity, accountability and professionalism.
• Continue to advance their knowledge and understanding of the profession through education and training.
• Safeguard the quality and credibility of their professional judgment from inappropriate influence.
• Ensure that the principles of respect for persons and the practice of obtaining informed consent are honored at all times, both in spirit and in practice.
• Observe both in spirit and in practice all legal, ethical and regulatory requirements pertaining to confidentiality of identifiable personal information, relevant records and communications.
• Avoid conflicts of interest in their affairs and make full disclosure before undertaking any matter that may be perceived as a conflict of interest.
• Adhere to all relevant ethical standards and practices for responsible conduct of research and medical practice.
• Abide by all applicable laws, regulations and official directives applicable to their professional activities in the legal jurisdiction(s) in which they work and reside, and respect the prevailing ethical and community standards.
• Abide by the laws and ethical codes of their respective disciplines.




CALL FOR BOARD NOMINATIONS

Would you like to be considered for membership on the 2013 ACRP Board of Trustees? The ACRP Nominating Committee is requesting your help in identifying candidates for the 2013 Board of Trustees vacancies. As a Trustee, you will experience opportunities to influence the clinical research profession and will be instrumental in advancing the Association’s mission. In addition to clinical research knowledge, ideal candidates should possess leadership, business, and financial skills. Individuals commit to a two-year term and may be re-elected for a second term. Candidates must be current members in good standing, and appointment requires a significant commitment of time and expertise as well as travel several times per year. If you would like to be considered for a seat or would like to nominate someone else, please visit www.acrpnet.org/boardnominations to review the election process, handbook, and application.

The APCR Nominating Committee is requesting your help in identifying candidates for the 2013 APCR Board of Trustees. With a mission “to advance medical innovation and public health by providing advocacy, promoting competence, and encouraging exchange for and among physicians involved in or affected by clinical research,” the APCR Board holds the governing responsibility to power APCR's success and strengthen the clinical research enterprise. Candidates must be current members in good standing and interested in helping govern our growing organization by supporting efforts to increase membership and expand the value of the ACE (Advocacy, Competence, Exchange) initiatives. Individuals commit to a two-year term and may be re-elected for a second term. If you would like to be considered for a seat on the APCR Board or would like to nominate another APCR member, please visit www.apcrnet.org/nominations to review the application and handbook.

Are you passionate about the Academy’s Certification Program and want to help lead it to even greater heights as a member of the Academy Board of Trustees? For 2013, the Academy Nominating Committee is seeking Board candidates who possess one or more of the following characteristics:
• Current CPI® or CCTI®
• Has previously served for at least two years as a Member of an Academy Global Certification Exam Committee
• Current certificants in good standing
• Legal residence is outside North America

As an Academy Trustee, you will directly contribute to the strategic, sustainable growth of the Academy’s Certification Program and to the advancement of the Academy’s mission “to advance and promote the professional interests of clinical research professionals by defining, promoting, and maintaining the highest standards and the best practices in the field of clinical research worldwide.” Academy Trustees commit to a three-year term and may be re-elected to a second term. To nominate yourself for the Academy Board, please complete the Academy Trustee application at www.acrpnet.org/boardnominations.

The deadline for nominations for ACRP Board, APCR Board, or Academy Board is 5:00 pm ET on September 30, 2012. Nominations for Board vacancies will be reviewed by the ACRP, APCR, and Academy Nominating Committees, and individuals will be notified prior to the general election in November. Committee nominations will be reviewed by the respective organization’s Board Vice Chair, Committee Staff Liaisons, the Executive Director, and the current committee Chair; candidates will be notified of selection prior to December 31, 2012. Questions? Please contact Sara Kilkenny at [email protected].

Monitor Article Submission Guidelines

ACRP welcomes submissions on topics that are relevant to clinical research professionals globally. Writing an article for The Monitor is an excellent way to boost your professional development, gain recognition, share important information about the latest developments in clinical research with fellow professionals around the world, and help ACRP maintain its role as the leading voice and information resource for clinical research professionals everywhere.

The Peer Review Process

The Editorial Advisory Board (EAB) reviews all articles for relevancy, accuracy, organization, objectivity, and integrity. Your proposal or article will be reviewed by two or more members of the EAB in a completely confidential, double-blind process; that is, you will not know who your reviewers are and they will not know who you are. The time frame for the review process depends on a number of variables, including the availability of reviewers who have the expertise to review the topic presented and the current production schedule. As a result, the review process may take longer than the usual two to four weeks. ACRP cannot guarantee placement in The Monitor, but the EAB considers all submissions seriously and makes every effort to review articles fairly and provide detailed, constructive feedback as needed.

In accordance with the peer review guidelines of the International Committee of Medical Journal Editors, the EAB reviewers read each article in an effort to determine if the paper is original and/or scientifically important, if it exhibits brevity and clarity, if it presents adequate interpretation, and if it draws appropriate conclusions. Thus, they address the following questions and indicate whether there is a need for revisions:

• Is the point of the article original and/or important, and well-defined? After reading the manuscript, reviewers ask themselves if they have learned something new and if there is a clear conclusion to the article.
• Are the data (if any) sound and well controlled? Reviewers will indicate if they feel that inappropriate controls have been used, explaining the reasons for their concerns, and suggesting alternative controls where appropriate.
• Is the discussion well balanced and supported by the data? The discussion should be relevant to the point and unbiased. It should not be overly positive or negative. Conclusions should be valid, with reference to other relevant work as applicable. Reviewers will ask the author(s) to provide specific examples if this is not the case.
• Have the authors provided references wherever necessary? Reviewers will ask authors to provide references for any statements that require them. When authors have provided references, reviewers will look to see if the reference seems appropriate for the statement.
• Do the title and abstract describe the work accurately? The title and abstract are the most frequently read sections of any article; therefore it is vital that they accurately describe it in a clear, balanced manner. Also, the title should be as brief as possible, while still conveying the point in an enticing manner.
• Can the writing, organization, tables, and figures be improved? Although the editorial team may also assess the quality of the written English, reviewers will comment if they consider the English in the submission to be below the standard that is expected for The Monitor. If the manuscript is organized in such a manner that it is illogical or not easily accessible to the reader, reviewers will suggest improvements in a concrete manner. They will also provide feedback on whether any data are presented in the most appropriate manner; for example, if a table is used when a graph would give increased clarity; if the figures are of a high enough quality to be published in their present form; or if numerous text items might be better presented as a bulleted list or in a table.
• Are there any ethical, promotional, or competing interest issues? Reviewers will comment if the work seems promotional or commercial in any way.

If accepted for publication, articles are published in the next available issue. Submissions may be held for use in an issue that presents many articles on the same theme. See below for the editorial schedule and deadlines for upcoming issues. Note, however, that the EAB will review any article on any clinical research topic any time it is submitted.

Editorial Schedule and Deadlines

Issue             Deadline              Topic
February 2012     September 15, 2011    Global Compliance & Oversight
April 2012        November 15, 2011     Trial Management
June 2012         January 15, 2012      The Business of Research
August 2012       March 15, 2012        Performance Metrics
September 2012    April 15, 2012        GCP Revisited
October 2012      June 15, 2012         Research Concerns
December 2012     July 15, 2012         Human Subject Protections

Questions? Contact the editor by e-mail at [email protected] or by phone at (703) 254-8100.

Authorship criteria

Authorship credit should be based on:
1. substantial contributions to conception and design, acquisition of data, or analysis and interpretation of data;
2. drafting the article or revising it critically for important intellectual content; and
3. final approval of the version to be published.

Authors should meet conditions 1, 2, and 3. All persons designated as authors should qualify for authorship, and all who qualify should be listed as authors. Each author should have participated sufficiently in the work to take public responsibility for appropriate portions of the content. Authors of accepted articles will be required to submit a short biography (up to 100 words), which will include a description of their contribution to the article.

All contributors who do not meet the criteria for authorship should be listed in an acknowledgements section. Examples of those who might be acknowledged include a person who provided purely technical help, writing assistance, or a department chair who provided only general support. Groups of persons who have contributed materially to the paper but whose contributions do not justify authorship may be listed under such headings as “clinical investigators” or “participating investigators,” and their function or contribution should be described—for example, “served as scientific advisors,” “critically reviewed the study proposal,” “collected data,” or “provided and cared for study patients.” Because readers may infer their endorsement of the data and conclusions, these persons must give written permission to be acknowledged.

Submission requirements

• Preferred article length: up to 2,500 words, accompanied by an abstract of no more than 150 words.
• Submissions must be originals and submitted exclusively to The Monitor. Authors of accepted articles must sign a copyright release, granting ACRP all rights to future publication and distribution in print and electronic forms.
• Articles may be based on research, data, new developments, or informational topics. Review articles may be considered, but contact the Editor prior to your submission for guidance.
• ACRP reserves the right to edit the content of the article.
• Submissions must not be commercial or in any way convey self-interest or promotion.
• EAB reviewers may ask the writer to revise the article according to their recommendations.
• Insert reference numbers manually within the text. Do not use automatic footnoting and referencing. Reference all sources at the end of the article. The Monitor uses a modified University of Chicago Press reference style. Basically, each reference must list all authors, publication year, article title, and full name of journal with volume, issue, and page numbers. If the citation is published on the Internet, provide the full URL pathway for readers to access it.
• Figures and tables are allowed, but those from previously published material must be submitted with a letter from the author or publisher granting permission to publish in The Monitor. Any fees associated with reprinting must be paid by the author prior to publication of the article in The Monitor.
• Electronic images should be high-resolution files (at least 300 to 600 dpi) with captions.

The Monitor uses the PeerTrack submission and peer review system. Prospective authors should log in or register (if new to the site) at www.editorialmanager.com/monitor and follow the instructions to fill in the contact information required by the system. You should upload articles in Microsoft Word, 12 point Times Roman, double spaced. Make certain that there is no author information inside the article file(s). The system will assign an article number and convert the file to a pdf, which the author must approve before it is ready for peer review.

Index of Advertisers

Please thank our advertisers for supporting ACRP, and let them know you saw their ad in The Monitor.

Barnett Educational Services . . . . . . . . . . 3
ExecuPharm, Inc. . . . . . . . . . . . . . . . . . . 50
Forte Research Systems, Inc. . . . . . . . . . 14
FXM Research . . . . . . . . . . . . . . . . . . . . 75
New England IRB . . . . . . . . . . . . . . . . . . 13
University of Chicago Graham School . . 22
Valesta Clinical Research Solutions . . . . . 1

What’s Next?

Here’s a look at what’s ahead for the themes of the peer-reviewed articles in The Monitor:

September 2012 — Global GCP Revisited — the special public issue will focus on new perspectives on the tenets and regulation of good clinical practice as they relate to the safe and ethical conduct of clinical trials on a global scale.

October 2012 — Research Concerns — including articles delving into a wide variety of clinical research topics for readers at different skill levels within the profession.

December 2012 — Human Subject Protections — including articles focusing on the safety of research volunteers, the top priority for responsible clinical research professionals everywhere.

Direct any questions to [email protected].




Calendar of Events

U.S. Classroom Courses

ACRP is pleased to announce the Professional Development classroom course schedule for 2012. For additional information, please visit www.acrpnet.org.

Fundamentals of Clinical Research
September 27–28, 2012, Minneapolis, MN
November 15–16, 2012, Alexandria, VA

Project Management
September 27–28, 2012, Minneapolis, MN
November 15–16, 2012, Alexandria, VA

U.K. Classroom Courses

Visit www.acrpnet.org for specific dates and locations.

Fundamentals of Clinical Research (UK)
September 2012
November 2012

Preparing Investigator Sites and Research & Development for GCP Inspections
August 2012
October 2012

Skills for Monitoring Non-Commercial Clinical Trials
August 2012
October 2012

Webinars

Live Webinars

August 22, 2012
So You Wanna be a CRA?
Sally Wilging

Webinar Replays

Query Writing: Bridging the Gap Between Data Management and Clinical Development
Kelly Forester
Original Air Date: June 13, 2012

Adverse Event Reporting in the Era of Web 2.0: The Challenges of Having a Two-Way Conversation
Elizabeth Garrard
Original Air Date: June 6, 2012

Quality Systems/SOPs: Keys to Success at the Site
Tiffany Gunneman
Original Air Date: May 30, 2012

Increasing Predictable Site Enrollment Success
Charles Rathmann and Matthew Lester
Original Air Date: May 16, 2012

Vulnerable Subjects: What the Regulations Don’t Say
Robert Romanchuk, BS, CCRC
Original Air Date: December 14, 2011

Research Misconduct: Lessons Learned and Practical Approaches to Problems
Stuart Horowitz, PhD, MBA; Jeffrey Cooper, MD, MMM
Original Air Date: November 9, 2011

Reduce or Eliminate Changes of Scope: The Clinical Trial Budget Secret
Brenda Reese, BSN, RN, CCRA; Arthur Czech
Original Air Date: October 19, 2011

Standard Digital Signatures and How They Can Enhance Clinical Operations
Rodd Schlerf, BS
Original Air Date: October 5, 2011

Site Selection, Patient Recruitment, and Patient Stipend Management: Industry Data and Insights
Joseph Kim, MBA; Samuel Whitaker
Original Air Date: August 17, 2011

The ePRO Choice: Understanding the Impact of Recent FDA Guidance on Your ePRO Tool Selection
Tim Davis, BSc
Original Air Date: August 10, 2011

Call for Proposals for the 2012 Live Webinar Series

Do you want to share your clinical research expertise with other industry professionals? Do you enjoy public speaking and relish the opportunity to provide your insight and knowledge on a particular topic? Then submit a webinar proposal to ACRP. For details on suggested topics and the webinar proposal process, visit www.acrpnet.org/webinars.

Are You Making a Difference?

Tell us about your own or a colleague’s contributions to the clinical research profession.
Submit a Nomination Today: www.acrpnet.org/awards