Journal of Clinical Epidemiology (2015)
ORIGINAL ARTICLE
An international survey and modified Delphi approach revealed numerous rapid review methods

Andrea C. Tricco a,b, Wasifa Zarin a, Jesmin Antony a, Brian Hutton c, David Moher c, Diana Sherifali d, Sharon E. Straus a,e,*

a Knowledge Translation Program, Li Ka Shing Knowledge Institute, St. Michael's Hospital, 209 Victoria Street, Toronto, Ontario M5B 1W8, Canada
b Epidemiology Division, Dalla Lana School of Public Health, University of Toronto, 6th Floor, 155 College St., Toronto, Ontario M5T 3M7, Canada
c Ottawa Methods Centre, Ottawa Hospital Research Institute, The Ottawa Hospital, General Campus, Centre for Practice Changing Research Building, 501 Smyth Road, PO Box 201B, Ottawa, Ontario K1H 8L6, Canada
d School of Nursing, McMaster University, 1280 Main Street West, HSC 3N24E, Hamilton, Ontario L8S 4K1, Canada
e Department of Medicine, Faculty of Medicine, University of Toronto, 27 Kings College Circle, Toronto, Ontario M5S 1A1, Canada
Accepted 25 August 2015; Published online xxxx
Abstract

Objectives: To solicit experiences with and perceptions of rapid reviews from stakeholders, including researchers, policy makers, industry, journal editors, and health care providers.

Study Design and Setting: An international survey of rapid review producers and a modified Delphi.

Results: Forty rapid review producers responded to our survey (63% response rate). Eighty-eight rapid reviews with 31 different names were reported. Rapid review commissioning organizations were predominantly government (78%) and health care (58%) organizations. Several rapid review approaches were identified, including updating the literature search of previous reviews (92%); limiting the search strategy by date of publication (88%); and having only one reviewer screen (85%), abstract data (84%), and assess the quality of studies (86%). The modified Delphi included input from 113 stakeholders on the rapid review approaches from the survey. Approach 1 (search limited by date and language; study selection by one reviewer only; and data abstraction and quality appraisal conducted by one reviewer and one verifier) was ranked the most feasible (72%, 81/113 responses), with the lowest perceived risk of bias (12%, 12/103); it also ranked second in timeliness (37%, 38/102) and fifth in comprehensiveness (5%, 5/100).

Conclusion: Rapid reviews have many names and approaches, and some methods might be more desirable than others.

© 2015 Elsevier Inc. All rights reserved.

Keywords: Rapid review; Survey; Systematic review; Delphi; Consensus; Knowledge synthesis
Conflict of interest: A.C.T. and S.E.S. are on the editorial board of the Journal of Clinical Epidemiology; however, they were not involved with the editorial decisions related to this article. The other authors have no conflicts of interest to declare.

Funding: The study was funded by a Canadian Institutes of Health Research (CIHR) Operating Grant (grant number DRB-126641). A.C.T. and B.H. hold a CIHR/Drug Safety and Effectiveness Network New Investigator Award, D.M. holds a University of Ottawa Research Chair, and S.E.S. holds a Tier 1 Canada Research Chair in Knowledge Translation.

* Corresponding author. Knowledge Translation Program, Li Ka Shing Knowledge Institute, St. Michael's Hospital, 209 Victoria Street, East Building, Room 716, Toronto, Ontario M5B 1W8, Canada. Tel.: 416-864-3068; fax: 416-864-5805. E-mail address: [email protected] (S.E. Straus).

1. Introduction

The methods for the conduct of a systematic review are well established [1-4]. Rapid reviews are knowledge synthesis products in which certain aspects of the recommended systematic review process are modified or omitted to produce timely information [5]. A formal definition of a rapid review does not exist [5]. However, one definition that has been proposed is "a rapid review is a type of knowledge synthesis in which components of the systematic review process are simplified or omitted to produce information in a shorter period of time" [5].

Numerous centers are conducting rapid reviews internationally. Many health technology assessment agencies are conducting rapid reviews in response to requests from decision-making agencies [6]. For example, the Canadian Agency for Drugs and Technologies in Health (CADTH; www.cadth.ca) has conducted more than 3,000 rapid reviews [7], and the US ECRI Institute (www.ecri.org) has conducted more than 4,000 rapid reviews [8] in the past decade alone.
What is new?

Key findings
- Eighty-eight rapid review products reporting numerous streamlined methods were identified. More than 30 different terms were used to describe a rapid review.
- The primary rationale for conducting a rapid review was the decision makers' need for timely access to information. The commissioning agency was often a government agency or health care organization.
- Through the modified Delphi approach, different issues related to rapid reviews were identified, and one rapid review approach (search limited by date and language; study selection by one reviewer only; and data abstraction and quality appraisal conducted by one reviewer and one verifier) was ranked the highest compared with the others, suggesting that some streamlined steps might be more desirable than others.

What this adds to what was known?
- This research provides up-to-date information on the experiences and perceptions of a range of stakeholders regarding rapid reviews.

What is the implication and what should change now?
- Numerous knowledge synthesis centers are conducting rapid reviews internationally, yet few studies have evaluated the accuracy, comprehensiveness, potential risk of bias, timeliness, and feasibility of rapid review approaches. Further research on rapid reviews is warranted, including the development of formal methods guidance for rapid reviews and a prospective study comparing the results of rapid reviews with those obtained through systematic reviews on the same topic.
Rapid reviews are increasingly being published in journals [9-13], including a recent example in the Journal of Clinical Epidemiology [14]. Evidence suggests that decision makers are currently using rapid reviews to inform their decision-making processes. Indeed, surveys of policy makers indicate that evidence from rapid reviews influenced decision making in most cases (>70%) [15-19]. Rapid reviews have been noted as being particularly useful for urgent and emergent decision making [6].

A recent article summarized evidence from 12 review articles of rapid reviews [20]. Inconsistency in definitions, methods, and applications was identified. In a related article,
more than 35 different rapid reviews produced by 20 different organizations were summarized [21]. Four different types of rapid reviews were identified, including inventories, rapid responses, rapid reviews, and automated approaches, which ranged in timeliness from 5 minutes (a computer algorithm in which users can enter a query) to 8 months [21].

Although numerous knowledge synthesis centers are conducting rapid reviews internationally, few studies have evaluated the accuracy, comprehensiveness, potential risk of bias, timeliness, and feasibility of rapid review approaches. As rapid reviews are becoming more popular and useful for decision making [22], we aimed to solicit the experiences and perceptions regarding rapid reviews from a wide range of stakeholders, including researchers, policy makers, industry, journal editors, and health care providers.

2. Methods

2.1. Protocol

A protocol to conduct an electronic survey and Delphi was compiled and revised based on feedback received from the Canadian Institutes of Health Research peer-review panel. It is available from the corresponding author on request.

2.2. Methods for the electronic survey

Organizations that produce rapid reviews were identified through the International Network of Agencies for Health Technology Assessment's (INAHTA) list of members (http://www.inahta.org/) and general Internet searches. A full list of the organizations that were invited to respond is presented in Appendix A at www.jclinepi.com.

A 16-item questionnaire was developed based on a previous survey of rapid review producers [6]. Before embarking on the online survey, we assessed face validity and pilot tested our questionnaire by sending it to 10 members of the Knowledge Synthesis Center at St. Michael's Hospital who were not involved with the survey development. The survey was revised, as necessary, and the final version is presented in Appendix B at www.jclinepi.com.

We used the definition of a rapid review put forth by Khangura et al. [5]. We asked participants about the terms they used to name a rapid review, the amount of time it typically takes to conduct a rapid review, the rationale for undertaking a rapid review, the commissioning agency for the rapid review, the intended audience for the rapid review, and whether a knowledge user panel is used. Each participant was asked to detail the aforementioned items for up to three unique rapid reviews. We also asked questions regarding the specific methods that were used to conduct the rapid review. The online survey was administered using FluidSurveys (http://fluidsurveys.com) between October 24, 2014, and January 31, 2015.

To increase the response rate on the online survey, effective survey methods for performing mail- and Internet-based surveys were used [23-25]. Specifically, participants
were invited via personalized e-mails, including a link to FluidSurveys and a formal invitation letter on letterhead as an attachment. A reminder along with the FluidSurveys link and invitation letter was sent by e-mail to nonrespondents 1 week after the first contact and again 2 weeks after the first contact. The cover letter and survey were sent 3 weeks later through facsimile to nonrespondents. The final contact for the survey entailed sending the cover letter and survey to nonrespondents through postal mail with a priority postreturn envelope. The survey composition was consistent across all three media (Internet, facsimile, and mail). A token of appreciation in the form of a $10 gift certificate from Amazon was offered to survey participants.

Data from FluidSurveys were collected automatically, whereas the data from facsimile and postal mail were manually entered into an Excel file by one team member and verified by a second team member. All data were analyzed descriptively using SAS statistical software (http://www.sas.com/en_ca/software/analytics/stat.html). The response rate for the survey was calculated as the number of unique responses received divided by the number of organizations that produce rapid reviews [24]. We included all information from the surveys, including incomplete surveys.

2.3. Methods for the Delphi

We sought the perceptions of participants using a modified Delphi approach [26]. The first component involved a ranking exercise that was administered in an online environment. For this, we invited researchers, clinicians, nurses, occupational therapists, physiotherapists, health policy makers and managers, journal editors, and funders to participate. Eligible participants (n = 50) were identified through the research team's network, general Internet searches, and the INAHTA database. The second component involved an in-person discussion at the CADTH Rapid Review Summit [22], a 2-day meeting on rapid reviews held in Vancouver (February 2015) involving members from all sectors of health care, from private industry to health policy makers. For this, we invited all 150 registrants of the CADTH Rapid Review Summit to participate in an online ranking exercise before the summit and an in-person discussion, which occurred during the summit. We ensured that participants
in the first exercise did not overlap with those who participated in the second one. For both ranking exercises, participants were asked to rank the feasibility, timeliness, comprehensiveness, and potential risk of bias of six rapid review approaches compared to systematic reviews (Table 1). These were identified as the most frequently used approaches through a scoping review of rapid review methods [27] and our electronic survey. The ranking exercise was assessed for face validity and pilot tested in the same manner as the electronic survey. The final ranking exercise, which was administered using FluidSurveys (http://fluidsurveys.com) between January 12, 2015, and February 22, 2015, can be found in Appendix C at www.jclinepi.com.

Participants in the two ranking exercises were provided with the overall results from the ranking exercise in FluidSurveys. This entailed providing them with the mode and distribution of ranking scores for each of the six rapid review approaches by the four elements (i.e., feasibility, comprehensiveness, timeliness, and risk of bias). Those who were asked to participate in the first exercise were asked to discuss the ranking results in an online environment. They were then asked to rerank the rapid review approaches through a second survey administered via FluidSurveys between February 12, 2015, and February 20, 2015. Those who participated in the second exercise were asked to discuss the ranking results at the CADTH Rapid Review Summit on February 4, 2015. Finally, these participants were asked to rerank the rapid review approaches using polling software during the summit.

We summed all the results from the two ranking exercises. Subsequently, the overall score, median, and mode were calculated for each rapid review approach according to the four elements that were evaluated (i.e., feasibility, timeliness, comprehensiveness, and risk of bias).
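To make these calculations concrete, the following is a minimal sketch, using hypothetical record names and illustrative values, of how the survey response rate (Section 2.2) and the per-approach, per-element summaries of the Delphi ranks (Section 2.3) could be computed. The study's actual analysis was run in SAS, so this Python fragment only illustrates the described steps and is not the authors' code.

```python
# Minimal sketch with hypothetical data; the study itself analyzed its data in SAS.
# It mirrors two calculations described in the Methods: the survey response rate
# (unique responses / organizations contacted) and the summed score, median, and
# mode of Delphi ranks for each approach-by-element combination.
from statistics import median, mode


def response_rate(n_unique_responses: int, n_organizations_contacted: int) -> float:
    """Response rate = unique responses received / organizations that produce rapid reviews."""
    return n_unique_responses / n_organizations_contacted


def summarize_ranks(records):
    """Group ranks by (approach, element) and return summed score, median, and mode."""
    grouped = {}
    for r in records:
        grouped.setdefault((r["approach"], r["element"]), []).append(r["rank"])
    return {
        key: {"sum": sum(ranks), "median": median(ranks), "mode": mode(ranks)}
        for key, ranks in grouped.items()
    }


if __name__ == "__main__":
    # 40 unique responses from 63 contacted producers -> 63%, as reported in Section 3.1.
    print(f"Survey response rate: {response_rate(40, 63):.0%}")

    # Illustrative Delphi records: one rank (1 = most favorable ... 6 = least favorable)
    # per participant, approach, and element. The real data covered six approaches and
    # four elements (feasibility, timeliness, comprehensiveness, risk of bias).
    delphi_records = [
        {"approach": "Approach 1", "element": "feasibility", "rank": 1},
        {"approach": "Approach 1", "element": "feasibility", "rank": 2},
        {"approach": "Approach 1", "element": "feasibility", "rank": 1},
        {"approach": "Approach 2", "element": "feasibility", "rank": 2},
        {"approach": "Approach 2", "element": "feasibility", "rank": 3},
        {"approach": "Approach 2", "element": "feasibility", "rank": 3},
    ]
    for (approach, element), stats in sorted(summarize_ranks(delphi_records).items()):
        print(approach, element, stats)
```

Under this reading, a lower summed rank on an element indicates that participants consistently placed the approach closer to the most favorable position for that element.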
3. Results

3.1. Response rate

A total of 63 rapid review producers were contacted for our survey and 40 responded, giving a response rate of 63% and a completion rate of 100% (i.e., all 40 participants completed the survey).
Table 1. Six most frequent rapid review approaches identified from the scoping review and survey

Approach | Literature search | Search limit | Screening | Data abstraction | Risk of bias appraisal
1 | >1 database, published only | Both date and language | One reviewer | One person abstracts, other verifies | One person assesses, other verifies
2 | Updating the literature search of a previous review, published only | None | One reviewer | One reviewer | Not performed
3 | >1 database, gray literature | Both date and language | One reviewer | One reviewer | Not performed
4 | >1 database, gray literature | Either date or language | One reviewer | One reviewer | Not performed
5 | >1 database, gray literature | Date | One reviewer | One reviewer | One reviewer
6 | >1 database, gray literature | Both date and language | Two reviewers | One reviewer | Not performed
For the online ranking exercise, 26 of 38 invited participants responded, giving a response rate of 68% and a completion rate of 100%. Finally, for the in-person ranking exercise, 90 of 150 participants responded, resulting in a response rate of 60% and a completion rate of 62%.

3.2. Rapid review characteristics from the survey

The 88 rapid review products were reported to take from 1 week to 12 months to complete, and 70% of the reviews were reported to have been conducted within 12 weeks (Table 2). The primary commissioning agencies of rapid review products were government agencies and health ministries (78%), followed by health care organizations and hospitals (58%). The target audience was a government agency or health ministry for 83% of the rapid reviews, followed by health care professionals (52%). An advisory panel (61%) and a knowledge user panel (59%) were reported to be present during the conduct of the rapid reviews. The final report was peer reviewed for 69% of the rapid reviews. The most common rationale for conducting a rapid review rather than a systematic review was decision makers' timelines, cited in 66% of the reports (Table 3).
Table 2. Summary of rapid review characteristics identified from the survey

Review characteristics | Count (%)
Duration of review (wk)
  1-12 | 62 (70)
  12-26 | 18 (20)
  26-36 | 6 (7)
  52 | 2 (2)
Commissioning agency
  Government agencies and health ministries | 69 (78)
  Health care organizations, hospitals, and community health agencies | 51 (58)
  Health care professionals | 13 (15)
  Industry | 4 (5)
Target audience
  Government agencies and health ministries | 73 (83)
  Health care professionals | 46 (52)
  Patients | 19 (22)
  Researchers | 21 (24)
Panel type
  Advisory panel | 54 (61)
  Knowledge user | 48 (59)
Peer-reviewed report | 61 (69)

Table 3. Rationale for conducting a rapid review

Rationale provided | Count (%)
Decision-maker timeline | 57 (66)
Focused or brief question | 8 (9)
Lack of resources | 5 (6)
Increase efficiency (including timeliness) | 4 (5)
Broad understanding of an area | 4 (5)
Identify topics requiring a systematic review | 2 (2)
Update a systematic review | 2 (2)
Well-established intervention | 1 (1)
Evidence is unclear | 1 (1)

3.3. Terminology used to describe the rapid review method

Eighty-eight rapid review products with 31 unique synthesis names were identified from these responses (Fig. 1). The most frequently used term to describe rapid reviews was "rapid review" (26 times), followed by "health technology assessment" and "evidence brief," both reported nine times.
Fig. 1. Weighted word cloud of terminology reported in the survey to describe a rapid review synthesis method. Terminology with the highest frequency: "rapid review" (n = 26), "evidence briefs" (n = 9), and "health technology assessment (HTA)" (n = 9).
3.4. Rapid review approaches used

Several streamlined rapid review approaches were identified through the survey (Table 4). The most frequently reported approaches were to update the literature search of previous reviews (92%); to limit the search strategy by date of publication (88%); and to have only one reviewer screen titles and abstracts (88%), screen full-text articles (83%), abstract data (84%), and assess the quality of studies (86%). Of note, 90% of the rapid review products were synthesized using a descriptive and narrative summary of the literature, whereas 45% of the respondents reported that their organizations perform some pooling of data or meta-analysis.

Table 4. Summary of rapid review method characteristics identified from the survey

Rapid review methods | Count (%)
Identifying relevant studies
  Database searched
    Used previous review(s) as a starting point | 79 (92)
    Searched one database only | 40 (47)
    No gray literature searched | 57 (67)
  Search filters
    Filtered by date | 75 (88)
    Filtered by language | 67 (80)
    Filtered by study design | 64 (77)
  Additional searches
    References scanned | 67 (81)
    Contacted authors/experts | 54 (64)
Selecting relevant studies
  Eligibility criteria
    Excluding unpublished material | 54 (64)
    Limiting the review by date | 74 (89)
    Excluding reports written in foreign languages | 60 (72)
    Only including certain study designs (e.g., randomized trials and systematic reviews) | 72 (87)
  Title and abstract screening (L1)
    Titles and abstracts not screened | 0 (0)
    Titles and abstracts screened by one reviewer only | 70 (88)
    Titles and abstracts screened by one reviewer and verified by another reviewer | 34 (44)
    Titles and abstracts screened by one reviewer and a random sample verified by another reviewer | 12 (15)
  Full-text screening (L2)
    Full text not screened | 8 (10)
    Full text screened by one reviewer only | 65 (83)
    Full text screened by one reviewer and verified by another reviewer | 30 (38)
    Full text screened by one reviewer and a random sample verified by another reviewer | 15 (19)
Data abstraction and quality appraisal
  Data abstraction
    Data abstraction not conducted | 10 (13)
    Data abstracted by one reviewer only | 67 (84)
    Data abstracted by one reviewer and a random sample verified by another reviewer | 16 (21)
    Data abstracted by one reviewer and verified by another reviewer | 39 (49)
  Quality appraisal
    Quality appraisal performed by one reviewer only | 68 (86)
    Quality appraisal performed by one reviewer and verified by another reviewer | 37 (46)
    Quality appraisal performed by one reviewer and a random sample verified by another reviewer | 17 (22)
    Quality appraisal not performed | 24 (31)
Data synthesis
    Narrative summary | 75 (90)
    Pooling of data/meta-analysis | 37 (45)

3.5. Delphi discussion

Participants raised several issues during the online and in-person discussions. Approaches using only one reviewer to screen and/or abstract data were considered feasible. As well, the use of previous systematic reviews as a starting point was rated high in feasibility but low in comprehensiveness. Comprehensiveness was ranked low if the approach limited search results by date and/or language. Including only published trials was thought to limit comprehensiveness and increase the risk of bias. Any approach that failed to assess the risk of bias was considered inherently highly biased. Finally, the timeliness of an approach was often directly linked to feasibility and inversely correlated with comprehensiveness, reflecting the well-known trade-off associated with the use of rapid reviews.

3.6. Delphi exercise ranking results

After the online and in-person discussions, participants reranked the six approaches presented in Table 1. The
complete results from the two rounds of in-person and online ranking are provided in Appendices D and E. Overall, approaches 1 and 2 were thought to be both highly feasible and timely, whereas approach 1 was also ranked as having the lowest potential risk of bias (Table 5). Both approaches 3 and 4 ranked moderately across all four factors, whereas approaches 5 and 6 were considered highly comprehensive, but ranked low in feasibility and timeliness. Approach 1
was thought to be the most feasible approach (72%, n = 81 of 113 responses), with the lowest perceived risk of bias (12%, n = 12 of 103), ranking second in timeliness (37%, n = 38 of 102), and fifth in comprehensiveness (5%, n = 5 of 100).
4. Discussion

Our results suggest that the conduct of rapid reviews varies considerably across rapid review producers. Through our study, 88 rapid review products and numerous streamlined methods were reported. Furthermore, many different terms were used to describe a rapid review, suggesting that standardization of terms is warranted. Indeed, Hartling et al. [21] have suggested a taxonomy of rapid review products, which will be useful for producers of rapid reviews. The primary reason for conducting a rapid review was decision makers' short timelines. The commissioning agency was often a government agency or health care organization. Through the ranking exercise, many different issues related to rapid reviews were identified. As well, one rapid review approach was ranked the highest compared with the others, suggesting that some streamlined steps might be more desirable than others.

The results of this research can be used as a first step to understand how rapid reviews can balance decision makers' need for accuracy with their need for timeliness. By enhancing knowledge in this area, stakeholders will obtain a clearer understanding of the impact of streamlining the systematic review process and the degree to which rapid review data can be trusted. They can also use this information to determine whether a rapid review is appropriate for their specific needs. Local and international institutions that are involved in generating rapid reviews can use these data to optimize rapid review methodology. Health care providers and patients will be able to use the results of this project to determine whether the evidence from rapid reviews is reliable.

Our study has some limitations. Overall, 62% of participants responded to our international survey and ranking exercises, and respondents may have different experiences and perceptions of rapid reviews compared with nonrespondents. However, our response rate is much higher than that of a previous similar survey [6] and consistent with expected response rates for Internet surveys [24]. Furthermore, our results are based on self-reported data and might not reflect real-world practices in conducting rapid reviews. The design of the survey may not have allowed the identification of multiple factors influencing the decision to conduct a rapid review. Although shortcuts are sometimes used in systematic reviews, our study focused on the intentional conduct of a rapid review. Finally, only a 62% completion rate was observed for the in-person Delphi, but this was because we underestimated the amount of time required to conduct this exercise.
Table 5. Reranking results for six rapid review approaches based on scale 6 (very important) and 7 (extremely important)

Rapid review approach | Feasibility | Timeliness | Comprehensiveness | Risk of bias
Approach 1 (literature search: >1 database, published only; search limit: both date and language; relevance screening: one reviewer; data abstraction: one person abstracts, other verifies; risk of bias assessment: one person assesses, other verifies) | 1 | 2 | 5 | 1
Approach 2 (literature search: updating the literature search of a previous review, published only; search limit: none; relevance screening: one reviewer; data abstraction: one reviewer; risk of bias assessment: not performed) | 2 | 1 | 6 | 6
Approach 3 (literature search: >1 database, gray literature; search limit: both date and language; relevance screening: one reviewer; data abstraction: one reviewer; risk of bias assessment: not performed) | 3 | 3 | 4 | 3
Approach 4 (literature search: >1 database, gray literature; search limit: either date or language; relevance screening: one reviewer; data abstraction: one reviewer; risk of bias assessment: not performed) | 4 | 4 | 3 | 5
Approach 5 (literature search: >1 database, gray literature; search limit: date; relevance screening: one reviewer; data abstraction: one reviewer; risk of bias assessment: one reviewer) | 5 | 5 | 1 | 4
Approach 6 (literature search: >1 database, gray literature; search limit: both date and language; relevance screening: two independent reviewers; data abstraction: one reviewer; risk of bias assessment: not performed) | 6 | 6 | 2 | 2
However, the responses received were consistent with our expectations and with the results of the e-Delphi.

Because of the exploratory nature of this study, future research on rapid reviews is warranted. In particular, the development of formal methods guidance for rapid review producers is required, which is a future endeavor for the study team. As well, team members recently wrote a grant proposal to conduct a diagnostic accuracy study in which the rapid review approach identified as the most desirable will be used to complete 40 rapid reviews and 40 corresponding full systematic reviews. This proposed research is important because it would help determine the accuracy of rapid review results and provide information on how to balance accuracy with the timeliness desired by decision makers [5]. This proposal was the highest ranked research project by the >150 participants surveyed at the CADTH Rapid Review Summit, which underscores its significance [22].

In conclusion, we identified numerous names and methods for rapid reviews. Some approaches might be
more desirable than others. Further research in this area is warranted, in particular a prospective study that empirically compares the results of rapid reviews with those of systematic reviews.
Acknowledgments The authors thank Dr. Donna Ciliska who provided support and expertise in rapid reviews and knowledge translation on our systematic review protocol. The authors also thank Ana Guzman for formatting the article and administering the survey and ranking exercise. Authors’ contributions: A.C.T. conceived the study, obtained funding for the study, drafted the survey and ranking exercises, pilot tested the survey and ranking exercises, led the in-person discussion, interpreted the data, and wrote the manuscript. W.Z. coordinated the study, revised the survey and ranking exercises, administered the survey and online ranking exercises, analyzed the data, and edited
the manuscript. J.A. coordinated the study, revised the survey and ranking exercises, facilitated the in-person discussion, and edited the manuscript. B.H., D.M., and D.S. helped obtain funding for the study, helped conceive the study, and edited the manuscript. S.E.S. conceived the study, obtained funding for the study, participated in pilot tests of eligibility criteria, moderated the in-person discussion, and edited the manuscript.
Supplementary data

Supplementary data related to this article can be found at http://dx.doi.org/10.1016/j.jclinepi.2015.08.012.

References

[1] Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions version 5.1.0 [updated March 2011]. The Cochrane Collaboration; 2011. Available at www.cochrane-handbook.org.
[2] Shamseer L, Moher D, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ 2015;349:g7647.
[3] Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ 2009;339:b2535.
[4] The Cochrane Collaboration. Methodological Expectations of Cochrane Intervention Reviews (MECIR). Cochrane Editorial Unit; 2015. Available at http://editorial-unit.cochrane.org/mecir. Accessed August 1, 2015.
[5] Khangura S, Konnyu K, Cushman R, Grimshaw J, Moher D. Evidence summaries: the evolution of a rapid review approach. Syst Rev 2012;1:10.
[6] Watt A, Cameron A, Sturm L, Lathlean T, Babidge W, Blamey S, et al. Rapid reviews versus full systematic reviews: an inventory of current methods and practice in health technology assessment. Int J Technol Assess Health Care 2008;24:133-9.
[7] Polisena J, Kamel C. Rapid review programs to support health care and policy decision making. Presented at the CADTH Rapid Review Summit hosted by the Canadian Agency for Drugs and Technologies in Health; 2015. Available at http://www.cadth.ca/media/events/JuliePolisena_Chris-Kamel_RR-Programs.pdf. Accessed August 1, 2015.
[8] Coates V. Rapid reviews and their impact on future directions for health technology assessment. Presented at the Rapid Review Summit hosted by the Canadian Agency for Drugs and Technologies in Health; 2015. Available at http://www.cadth.ca/media/events/Vivian-Coates_Keynote.pdf. Accessed August 1, 2015.
[9] Munn Z, Lockwood C, Moola S. The development and use of evidence summaries for point of care information systems: a streamlined rapid review approach. Worldviews Evid Based Nurs 2015;12:131-8.
[10] Toomey E, Currie-Murphy L, Matthews J, Hurley DA. The effectiveness of physiotherapist-delivered group education and exercise interventions to promote self-management for people with osteoarthritis and chronic low back pain: a rapid review part I. Man Ther 2015;20:265-86.
[11] Parker S, Fuller J. Are nurses well placed as care co-ordinators in primary care and what is needed to develop their role: a rapid review? Health Soc Care Community 2015. http://dx.doi.org/10.1111/hsc.12194. [Epub ahead of print].
[12] van der Scheer-Horst ES, van Benthem PP, Bruintjes TD, van Leeuwen RB, van der Zaag-Loonen HJ. The efficacy of vestibular rehabilitation in patients with benign paroxysmal positional vertigo: a rapid review. Otolaryngol Head Neck Surg 2014;151:740-5.
[13] Treanor CJ, Donnelly M. The late effects of cancer and cancer treatment: a rapid review. J Community Support Oncol 2014;12:137-48.
[14] Cooper CL, Hind D, Duncan R, Walters S, Lartey A, Lee E, et al. A rapid review indicated higher recruitment rates in treatment trials than in prevention trials. J Clin Epidemiol 2015;68:347-54.
[15] Hailey D. Health technology assessment. Singapore Med J 2006;47:187-92; quiz 93.
[16] Hailey D. A preliminary survey on the influence of rapid health technology assessments. Int J Technol Assess Health Care 2009;25:415-8.
[17] McGregor M, Brophy JM. End-user involvement in health technology assessment (HTA) development: a way to increase impact. Int J Technol Assess Health Care 2005;21:263-7.
[18] Hailey D, Corabian P, Harstall C, Schneider W. The use and impact of rapid health technology assessments. Int J Technol Assess Health Care 2000;16:651-6.
[19] Zechmeister I, Schumacher I. The impact of health technology assessment reports on decision making in Austria. Int J Technol Assess Health Care 2012;28:77-84.
[20] Featherstone RM, Dryden DM, Foisy M, Guise JM, Mitchell MD, Paynter RA, et al. Advancing knowledge of rapid reviews: an analysis of results, conclusions and recommendations from published review articles examining rapid reviews. Syst Rev 2015;4:50.
[21] Hartling L, Guise JM, Kato E, Anderson J, Aronson N, Belinson S, et al. EPC methods: an exploration of methods and context for the production of rapid reviews. Rockville, MD: Agency for Healthcare Research and Quality (US); 2015.
[22] CADTH Summit Series. Rapid review summit: then, now and in the future. Summary report. Vancouver, British Columbia: CADTH; 2015. Available at https://www.cadth.ca/sites/default/files/pdf/RR%20Summit_FINAL_Report.pdf. Accessed August 1, 2015.
[23] Kellerman SE, Herold J. Physician response to surveys. A review of the literature. Am J Prev Med 2001;20:61-7.
[24] Dillman DA. Internet and interactive voice response surveys. In: Mail and internet surveys: the tailored design method. New York: John Wiley & Sons Inc.; 2000.
[25] Edwards PJ, Roberts I, Clarke MJ, Diguiseppi C, Wentz R, Kwan I, et al. Methods to increase response to postal and electronic questionnaires. Cochrane Database Syst Rev 2009;MR000008.
[26] Fink A, Kosecoff J, Chassin M, Brook RH. Consensus methods: characteristics and guidelines for use. Am J Public Health 1984;74:979-83.
[27] Tricco AC, Antony J, Zarin W, Hutton B, Moher D, Sherifali D, Straus SE. Systematic review of rapid review methods. Presented at the Rapid Review Summit hosted by the Canadian Agency for Drugs and Technologies in Health; 2015. Available at http://www.cadth.ca/media/events/Andrea-Tricco_RR-vs-Systematic-Reviews_Feb-4-2015.pdf. Accessed August 1, 2015.