Progress in Development Studies 13, 4 (2013) pp. 255–265

Editorial: As well as the subject: Additional dimensions in development research ethics

Laura Camfield
School of International Development, University of East Anglia, UK

Richard Palmer-Jones
School of International Development, University of East Anglia, UK

The past decade has seen an increasing emphasis on ethical procedures for international development research, drawing heavily on medical models focused on the protection of subjects (for example, informants, vulnerable groups or those in conflict/post-disaster situations; see American Anthropological Association [AAA], 2012; Economic and Social Research Council [ESRC], 2010). Despite this, other dimensions of research ethics seem to us to be relatively neglected, namely, obligations to society, funders and employers, and peers (development practitioners, policymakers and researchers). These obligations include doing non-trivial, beneficent and high-quality research. As Iphofen (2009: slide 1) argues, ‘all research contains harm, since it is, to varying degrees, intrusive upon the lives of others. But that intrusion can and should be justified in terms of the benefits accruing – to individuals, communities and/or societies’. Balancing
benefits and harm requires an attention to process, that is, conducting ethical and well-governed research; maintaining and sharing accounts of research practices that affect the conclusions that can be drawn from the data and the uses to which they can be put; managing moral obligations to and the expectations of different stakeholders; and producing research outputs that are useful and properly disseminated (see British Sociological Association [BSA], 2003). These obligations are emphasized in the Government Office for Science’s (2007) Universal Ethical Code for Scientists,1 with its catchwords ‘respect, rigour and responsibility’. While respect for subjects is covered by existing ethical guidelines, rigour and responsibility were previously considered matters for methodologists rather than ethicists. They are particularly relevant to development research, however, since rigour is defined as ‘act[ing]
with skill and care in all scientific work’, something research in development is sometimes said to be lacking (for example, Harriss-White and Harriss, 2007: 17; Lewis and Opoku-Mensah, 2006; and Pieterse, 2001: 4, in Mohan and Wilson, 2005), while responsibility covers ‘communicating results and intentions honestly and accurately, and understanding that your work or its outputs will have an impact on society in its broadest sense’ (Government Office for Science, 2007: 1). In other works (Camfield, 2014; Camfield and Palmer-Jones, 2012), we have discussed the extent to which research in international development can be considered rigorous. In this special issue, we look at respect, rigour and responsibility focusing on three areas of ethical practice: (a) relationships with ‘subjects’ (specifically retention in longitudinal studies and restudies or revisits to particular sites); (b) relationships with peers (reanalysis of archived qualitative data, revisits to research sites and replication of influential economic analyses); and (c) relationships with society (the relationship between data and evidence and between research, policy, commentary and advocacy; Manski, 2011). The issue draws on papers2 from a European Association of Development Research Institutes and Development Studies Association conference panel (2011) and a University of East Anglia seminar series in 2011–2012.3 The questions that motivate this issue derive, in part, from concerns with reporting of quantitative work where errors in data production, processing or analysis are difficult to detect in the review process. These errors affect the use that other researchers, practitioners and policymakers can make of the findings. The concern with the quality of reporting is embodied in the growing, but not always popular, practice of replication of quantitative analyses (Hamermesh, 2007, and Duvendack and Palmer-Jones, in this issue). The issue is also motivated by parallel concerns with qualitative research. Golden (1995) argues
that even though qualitative analysis cannot be replicated in the same way as quantitative, the same level of transparency required for quantitative replication (for example, deposit of data and rigorous methodological accounts) should apply to qualitative research. Without this, she claims that ‘field research is akin to religion: a private, esoteric communication, the results of which can only be trusted but never verified’ (Golden, 1995: 482). Golden also suggests that, in relation to qualitative research, ‘knowing in advance that one’s field research may come under the scrutiny of one’s colleagues is likely to induce greater care in the process of research’ (Golden, 1995: 483). However, some qualitative researchers have argued that this requirement might reduce the quality of research. For example, researchers may censor their documentation to prevent their work looking bad as ‘academics build their careers on rational, neat and clean accounts’ (Silva, 2007: 4; see also Broom et al., 2009; Gillies and Edwards, 2011: 24; Hammersley, 1997: 136).4 These accounts might ‘bear only a remote relationship to how the research was actually done, in the way that bureaucracies sometimes produce official records that have little direct relation to how they actually operate’ (Hammersley, 1997: 136). Within international development, the production of sanitized accounts can already be seen in the development of ‘toolkits’ by large-scale research studies, 5 presumably designed with accountability and public scrutiny in mind. The presentation of these toolkits reflects their purpose as an additional form of validation for the research findings rather than an opportunity for reflection and frank discussion with peers. As Silva (2007: 5.7) observes, ‘reflexivity of research processes is generally not lacking. Actually, it is very much because of reflexivity that particular accounts are made of research processes’ (that is, researchers are painfully aware of how their actions might be perceived by others).

The importance of accurate methodological accounts is brought to the fore by the growth in qualitative as well as quantitative archiving of research materials, albeit in the context of qualitative research in the Global North.6 The archiving of qualitative research materials of classic studies7 and the Mass Observation archives8 have provided resources for secondary analysis (Savage, 2010). However, as Irwin (in this issue) argues, using the example of the Timescapes archive,9 for archived data to be usable, they need to be accompanied by an account that reflects openly on the challenges of data production and processing insofar as they might affect the validity of the authors’ and future analyses.10 Considerable work has been done in this area by Qualidata and linked archives across Europe, as discussed in Camfield and Palmer-Jones (in this issue). These archives contain documents of tremendous historical importance, such as Daniel Bertaux’s study of family mobility and intergenerational transfer of poverty in France and Russia, which provide broader insights into processes of social transformation. As yet, there are few international deposits; however, given the large amount of qualitative data collected in developing countries by ESRC and Department for International Development (DFID)-funded researchers, often in the same geographical locations, we propose that this needs to change. In the remainder of the editorial, we look at these issues in the context of evidence-based policymaking (EBPM) within international development, specifically the translation of findings that may have a weak empirical or analytical base into ‘evidence’, which plays an increasingly important role in development discourse and practice. We conclude by describing how the five articles in the special issue speak to themes of rigour, respect and responsibility in relationships with ‘subjects’, peers and society. Evidence-based medicine, which provides the model for EBPM, gained crucial impetus from Antman et al.’s (1992) comparison of
experts’ recommendations on myocardial infarction and the results of meta-analyses, which apparently showed how informal reviews and reliance on single studies led to the recommendation of ineffective treatments. However, Hammersley (2009: 1) suggests that the argument made by Antman et al. and other proponents of evidence-based medicine is overstated. Doctors have always made use of evidence of different types, including experience and expert opinion: ‘the claims made for [evidence-based medicine] gave the impression that, previously, doctors had made their treatment decisions by consulting God or flipping a coin’ (Hammersley, 2009: 1; see also Chalmers et al., 2002). Antman et al.’s study was followed by the coining of the phrase ‘evidence-based medicine’ (EBM), and the establishment of the Cochrane Centre in 1992, building on work by Cochrane in the 1970s which focused on experimental evidence for the efficacy of interventions (Sackett et al., 1996). Following the wide uptake of EBM, for example, by the United Kingdom (UK) government’s National Institute for Clinical Excellence, the Campbell Collaboration was established in 2000 to conduct reviews of educational and social interventions, especially in the fields of social policy and criminology.11 In the last few years, the UK DFID also adopted a medical model of evaluation and started a programme of systematic reviews of policy interventions, in partnership with 3ie and AusAID.12 The impetus for DFID’s programme was both global and local in that one of the first commitments of the UK coalition government was to ‘evidence-based aid’ which would be monitored by the Independent Commission for Aid Impact. This commitment was foreshadowed by the ‘new managerialism’ that emerged under the previous government (discussed later), which entailed increasing use of contracting to private and civil society organizations and an expanding evaluation industry to evaluate the outcomes of these contracts. The commitment to evidence,
whilst difficult to challenge13 – who would not want policy to be based primarily on evidence – throws up a number of questions. First, what is evidence and who decides?14 Second, how is evidence produced? Finally, does/can evidence from research inform policy, and is this always beneficial? We address these questions later, starting with epistemology and concluding with examples drawn from international policymaking. In assessing the value of EBPM, the first question to address is: what is evidence? Evidence is defined as ‘the available body of facts or information indicating whether a belief or proposition is true or valid’ (OUP, 2012). By definition, evidence is always ‘of’ or ‘for’ something and therefore specific to a particular question; it is also always a ‘claim’. On its own, evidence cannot arbitrate questions of value and ethical and moral issues relating to how effects are distributed (Greenhalgh and Russell, 2009). It also cannot address unexpected consequences; the example Hammersley (2009) gives is that grouping secondary school pupils according to their perceived ability not only depresses performance of those in the lower bands but also teaches students that there are fixed and stable differences in academic potential/intelligence. Second, what is the relationship between research evidence and the other resources that policymakers draw on, such as their own and others’ experience, intuition, judgement and tacit knowledge? The role of evidence can be limited since policymakers have to take into account considerations that are not specifically about whether or not a policy is likely to ‘work’, for example, whether they can persuade key stakeholders and how it will affect existing activities. It is certainly not the case – appealing though this idea is – that ‘if we do enough research, we will abolish situations in which the available evidence is irrelevant, ambiguous, uncertain, or conflicting; that evidence from research is value-free and context-neutral; and that such evidence is of greater value than evidence from
personal experience or opinion’ (Greenhalgh and Russell, 2009: 308). The reason for this is that decision making in development, as in medicine, is not a purely technical exercise: ‘the role of phronesis, of experience, expertise and judgment, is as important in policymaking as it is in other forms of practice’ (Hammersley, 2009: 8). To suggest otherwise devalues the tacit, embodied knowledge of practitioners (Pope, 2003). Third, how does/can research inform policy?15 Policymaking is a complex and political task involving consideration of different framings of development problems, contextualization of relevant evidence to particular local circumstances, weighing this evidence-in-context against other evidence-in-context and deciding how resources can be fairly allocated. For this reason, ‘a narrowly “evidence-based” framing of policymaking, which posits a linear and direct relationship between evidence and policy, may be inherently unable to explore the complex, context-dependent, and value-laden way in which competing options are negotiated by individuals and interest groups’ (Greenhalgh and Russell, 2009: 304). As can be seen in the examples given later, policymaking is a process of incremental decision making or ‘muddling through’, which involves negotiation across multiple perspectives (Lindblom, 1959). By overstating the role of research evidence in solving what are essentially socio-political problems, that is, taking an engineering approach to epistemology and the role of (social) science in political decision making, political programmes become disguised as science. Lambert (2009) situates EBPM within the new managerialism and audit culture studied by Power (1997), Strathern (2000) and Shore and Wright (1999) (for example, Blair’s ‘new public management’, expressed in the ‘Modernising Government’ white paper, Cabinet Office, 1999). She characterizes it as ‘a radical programme of Weberian-style rationalization [which] has rapidly succeeded in displacing the traditional grounds of expert (medical)
knowledge – clinical expertise – in favour of a new regime of truth whose legitimate currency is statistical evidence’ (Lambert, 2009: 17). Evidential hierarchies within development have been influenced by Guyatt et al.’s (2000: 1293) hierarchy of evidence which ranges downwards from randomized controlled trials (RCTs) to ‘unsystematic clinical observations’, despite the acknowledged shortcomings of RCTs (Deaton, 2009; Petryna, 2009). This hierarchy appears to reflect the extent to which evidence can be quantified (Sign, 2012) and does not include patients’ experiences, values and preferences (Melamed et al., 2012). While there are discussions over how to incorporate qualitative and narrative evidence – the Cochrane Collaboration has a Qualitative Research Methods Group – these discussions are usually presented as a technical problem rather than an epistemological one. The exclusive focus on the quality/extent of the evidence in a systematic review may be misplaced as an understanding of the environment in which it will be used may be more important in judging its value (Boaz and Pawson, 2005). There is also potential for bias in the collation and use of evidence, illustrated by Boaz and Pawson’s (2005) comparison of five reviews of youth mentoring. They found confident assertions from reviewers based on ambiguous evidence, a common inclination to ‘go beyond the evidence’, and contradictions between the conclusions of the reviews, despite near-identical datasets. The problem with these practices is that widely varying conclusions enable policymakers to pick and choose, creating ‘policy-based evidence’ rather than evidence-based policy. Despite the positivistic framing of systematic reviews, there is a subjective element through the use of reviewers’ knowledge, judgement and skill. Hammersley argues that it is impossible to make the process of review completely transparent as ‘readers who do not already have expertise in the relevant area of research, and in literature reviewing, may have difficulty making good use of information about how a review was done...because what they
lack is the tacit knowledge that is embodied in the activities of research and reviewing’ (Hammersley, 2009: 5).

This may explain the fetishization of particular features of a study as an indicator of validity, such as whether or not there was a control group, which is neither sufficient nor necessary, nor in many cases possible. A final point relates to the tendency to bring together fundamentally different types of programme in a single systematic review. For example, Gallo’s (1978) critique of the earliest meta-analysis of the efficacy of psychotherapeutic programmes suggested that actual interventions were so dissimilar that ‘apples and oranges’ had been brought together in the calculation of the mean effect. This is obviously problematic in applying this type of evidence, since what practitioners need to know is what types of programme will work with their particular clients. Reviews therefore need to be sufficiently sophisticated to say, ‘for whom, in what circumstances, in what respects, and at what costs are changes brought about?’ (Boaz and Pawson, 2005: 191), rather than suggesting that ‘there is a gold-standard method of research synthesis capable of providing unambiguous verdicts on programmes, and...deliver all-purpose policy advice’ (Boaz and Pawson, 2005: 177). Within a development context, the progress of EBPM may be blocked by some of the obstacles identified in a recent review by the Overseas Development Institute (summarized in Jones, 2012). Jones suggests that previous efforts to improve evidence-based decision making failed because they tried to impose frameworks from other fields, paying insufficient attention to complex challenges faced by development policymakers and practitioners, such as high staff turnover and limited institutional memory. A more fundamental problem which is not unique to development is that decision-making models are often based on an ideal of the policy cycle that is irrelevant for complex and sometimes
politically instigated work, driven by events, news or domestic political agendas. Advocates of EBPM in development need to, instead, find constructive ways to work in the environment of adversarial politics, where unsuccessful work is seen as a major embarrassment with ‘failures’ rarely admitted or learned from (Pritchett, 2002). They also have to tackle the overriding incentive to ‘get money out of the door’, with staff focused on ex ante appraisal and releasing funds, rather than ensuring that programmes actually deliver change. An insight into the political pressures on policymakers comes from Vincent Cable (2003) who described five limitations on the ability of decision makers to pursue an evidence-based approach: speed, superficiality, spin, secrecy and scientific ignorance. There are various implications of these: speed requires policymakers to process information quickly and make decisions without all the necessary information; the superficiality of their knowledge of the areas in which they are working means that they are heavily dependent on advisers; spin requires them to respond to what is perceived to be the best measure, even if this is not supported by the evidence; and scientific ignorance refers to public suspicion of scientists and scientific evidence. These factors create a perverse incentive structure, which is described in Manski’s (2011) discussion of policymaking in an uncertain world. Manski (2011) suggests that the way research and policymaking are structured encourages certitude, even where this is misplaced. For example, the scientific community rewards strong, novel findings; the public like simple analyses; and pressure is consequently put on researchers to put forward assumptions that are far stronger than they can defend in order to draw strong conclusions (see also Behague and Storeng, 2007). As an illustration of this problem, Manski recounts a perhaps apocryphal, but quite believable, story about an economist’s attempt to describe his uncertainty about a forecast to President Lyndon B. Johnson. The economist presented his forecast as a likely range of values for the quantity under discussion. Johnson is said to have replied,

‘Ranges are for cattle. Give me a number.’ (Manski, 2011: 4)

Manski (2011) identifies six different ways of presenting evidence which he labels ‘conventional certitude’ (predictions are accepted as true although this is not necessarily the case); ‘duelling certitudes’ (different outcomes emerge under different sets of assumptions, for example, the calculation of poverty lines); ‘conflating science and advocacy’ (for example, the 2007 Lancet series on child development in developing countries); ‘wishful extrapolation’; ‘illogical certitude’; and ‘media overreach’. The first example of conventional certitude is the case of microfinance in developing countries which is discussed in Duvendack and Palmer-Jones (in this issue). Encouragement of certitude may reflect an inflated idea of what tools such as econometrics, meta-analysis and systematic reviews can do for policymakers, that is, ‘that the review will somehow unmask the truth and shed direct light on a tangible policy decision’ (Boaz and Pawson, 2005: 184). The idea that the evidence becomes the policy decision means that ‘when “evidence based” recommendations are propelled forth into the policy community, they often shed the qualifications and scope conditions that follow from the way synthesis has been achieved’ (Boaz and Pawson, 2005: 185). Researchers are also required to make judgements that are not within their competence, for example, deciding whether a ‘small but significant effect’ is an argument for implementation or not. This requirement arises from what Behague (2009: 37) describes as the imposition of ‘an increasingly codified moral code [which] stipulates...that researchers have an obligation to engage with policy, and conduct research in a way that is useful and accessible to clinicians and policy-makers’. This requirement is framed in moral terms even though ‘research structured specifically to be policy-relevant [is] often... used by governmental and non-governmental institutions either as a way of justifying funding
decisions, or as a way of confirming preexisting ideological and political positions’ (Behague et al., 2009: 1545). We support these points with examples from the work of Behague on global maternal health policy (Behague and Storeng, 2008; Behague et al., 2009) and Sumner and Harpham (2008) on child health. Behague et al. (2009) looked at EBPM in five developing countries to explain why it has had limited impact on maternal health policy development and implementation at the national and sub-national levels. They suggest that EBPM’s emphasis on uniform methodologies works against disciplinary diversity or addressing context-specific problems and that it tends to be used to legitimate rather than inform global policy. As Pope (2003) suggested, EBPM is being used as a form of regulation to align the goals of national governments and international stakeholders where ‘nationally-developed evidence-based policies hold little weight in countering the sways of global donor-driven policy interests, which are themselves legitimated by, rather than driven by, international research’ (Behague et al., 2009: 1542). EBPM is therefore used to justify ‘fad-like’ shifts in policy which do not reflect local needs and tend to replace current policies in a piecemeal way.16 An example of this is the way the influential Makwanpur study17 of maternal health (Manandhar et al., 2004) was used across the developing world to argue that governments needed to empower communities rather than fund specialist health services and strengthen existing health systems. According to Behague et al. (2009: 1542), ‘interpretations of this study’s findings have been biased by the larger political context within which donors set priorities, distribute limited resources, and make policy decisions’. Behague and Storeng (2008) also looked at the distorting effects of research evidence on policy, showing how research practices inadvertently support vertical initiatives (that is, initiatives that are specialized or disease specific) rather than
complex, systemic interventions, due to the difficulty of producing evidence of effectiveness for broad-based programmes rather than specific clinical interventions (cf. Deaton, 2009). As a result, researchers concentrate on single-component evaluations which generate consensus, are easy to disseminate and appeal to donors. In the context of a ‘dogmatic and detrimental donor demand for experimental evidence’ – what the authors call ‘rigorous answers to irrelevant questions’ – ‘being bold and diverting from experimental designs means opening oneself up to criticism and potentially losing publications, funds and political credibility’ (Behague and Storeng, 2008: 647). Factors shaping the use of evidence are discussed by Sumner and Harpham (2008) who compare the ‘market’ for evidence relating to child health in Andhra Pradesh and Vietnam. They acknowledge that policymaking in Southern contexts is qualitatively different to policy-making solely in Northern contexts due to shifting political contexts and the roles of donors and international aid organizations. In fact, Sumner and Harpham argue that the main factor encouraging use of international evidence in policy processes is external pressure from donors on domestic policymakers. However, as Behague et al. (2009) also argue, this raises further questions about whose evidence counts, as there are competing ‘hierarchies of evidence’, some of which relate to personal connections or to ‘packaging and brand’ rather than methodological rigour, or local contextual knowledge (Sumner and Harpham, 2008: 728). The ethical challenges outlined earlier have been addressed by authors in other fields (for example, Bishop, 2007, in relation to archiving). For this reason, this special issue brings together examples of challenges faced by researchers and practitioners in the field of international development with similar studies in other fields, for example, secondary data analysis and revisiting the sites of previous studies. It reviews emerging practices such as revisits (Arvidson, Crow, and Camfield
and Palmer-Jones, in this issue); replication of economic analyses (Duvendack and Palmer-Jones, in this issue); and archiving and reuse of qualitative data (Irwin, and Camfield and Palmer-Jones, in this issue). The articles are grouped under two headings: first, relationships with ‘subjects’, addressing retention of respondents and research site revisits; and second, relationships with peers/society, covering reanalysis and replication. Most of the articles have an explicitly international focus, for example, the ethical dimensions of ‘immersions’ in Bangladesh (Arvidson, in this issue) and challenges of replicability in microeconometric analyses in development (Duvendack and Palmer-Jones, in this issue). All address methodological issues that are central to development research. Now, we provide a short introduction to the articles. The first article by Graham Crow draws attention to the reflections of research subjects on their experiences of both the original study and subsequent revisit. He suggests that community restudies can inform our understanding of what members of communities want and expect from research (for example, a non-judgemental memorialization of their way of life). They can also expose the extent to which these expectations are realized and realizable. Malin Arvidson picks up the theme of intimacy and distance in research relationships based on her experience of the Reality Check Approach, a longitudinal qualitative study in Bangladesh. She contrasts formal and limited understandings of research ethics with the challenges posed by friendship, communal norms and patronage, which are not easily resolved by reference to ethical guidelines. Building on Crow’s discussion of the appropriate relationship of researcher and researched, she suggests that maintaining emotional distance need not prevent empathy, understanding and the obtaining of good-quality data, but can instead enable respectful and bounded research relationships and give the researcher more space for analysis.

Moving on to the second of our themes – relationships with peers/society – Irwin explores some of the ethical and epistemological challenges entailed by concern for the interests of peers in secondary analysis of qualitative data. These include engaging with the contextually embedded nature of data, including the context of the research design and disciplinary assumptions under which it was produced. She illustrates these challenges using analyses of young people’s expectations about accessing higher education in the UK and gendered experiences of time pressure among young parents. Secondary analysis of quantitative data is explored by Duvendack and Palmer-Jones who describe the benefits of replications of influential studies in international development and the challenges faced by would-be replicators. The benefits are set in a context of increased adoption of medical models of evidence-based policy in development, evidenced by promotion of RCTs, systematic reviews and meta-analysis. The aim of these, as discussed earlier, is to find what policies and interventions work in development. However, despite the potential of replication as a means of answering these questions (for example, Duvendack and Palmer-Jones’s challenge, in this issue, to the mythology of microfinance), there are few published replications. As Duvendack and Palmer-Jones explain, the reasons for the paucity of replications range from a lack of incentives to undertake them, given the importance of ‘original’ journal publications in academic careers, to reluctance to challenge professional peers or mainstream paradigms. Finally, Camfield and Palmer-Jones provide an overview of practice in two main areas: restudies or revisits to field sites as a form of qualitative replication (see also Crow, in this issue) and secondary qualitative data analysis (see also Irwin, in this issue), drawing out the ethical implications of what might appear to be primarily methodological concerns.

Notes
1. The code suggests that a shared set of values and responsibilities applies to ‘anyone whose work uses scientific methods, including social, natural, medical and veterinary sciences, engineering and mathematics’ (Government Office for Science, 2007: 2).
2. Some of the papers presented in the seminar series will also be published in a Journal of Development Studies special section.
3. See http://www.uea.ac.uk/dev/ethicalanalysis (last accessed on 29 May 2013).
4. Censoring accounts of practice also prevents future researchers from gaining insights into method and the process of research from ‘bad’ (uncensored) practice (Savage, 2010). Gillies and Edwards (2011: 24) suggest that the original fieldnotes deposited by Peter Townsend and Dennis Marsden in the ESRC’s Qualidata archive, ‘highlight the relative sterility of our contemporary fieldnotes which are routinely self censored...[which] raises some interesting questions around what is considered good research practice’.
5. For example, see www.younglives.org.uk/what-we-do/research-methods/methods-guide (last accessed on 29 May 2013).
6. See http://www.esds.ac.uk/qualidata/about/introduction.asp (last accessed on 29 May 2013).
7. See http://www.esds.ac.uk/qualidata/pioneers/ (last accessed on 29 May 2013).
8. See http://www.massobs.org.uk/index.htm (last accessed on 29 May 2013).
9. See http://www.timescapes.leeds.ac.uk/archive/ (last accessed on 29 May 2013).
10. For example, the incorporation of qualitative material in the systematic reviews that are intended to guide policymaking within the development sector, such as https://www.gov.uk/government/news/dfid-research-systematic-review-database-just-one-click-away (last accessed on 29 May 2013).
11. See www.campbellcollaboration.org/background/index.php/ (last accessed on 29 May 2013).
12. See http://devpolicy.org/the-uks-ten-point-plan-for-better-aid/ (last accessed on 29 May 2013).
13. Nonetheless, Hammersley (2009: 9) observes that ‘the notion of evidence-based practice has not been supported by the sort of evidence on which it claims decisions ought to be made: there has been no demonstration that it “works” better than alternatives’.
14. Hammersley (2010: 4.6) helpfully distinguishes between data, which is ‘collected or generated as a resource’, and ‘evidence’, which refers to ‘what is eventually used as grounds for inference to research conclusions in publications’.
15. See Flyvbjerg’s (1998) case study of town planning in Denmark.
16. See Mosse (2009) for faddish shifts in another arena of development policy.
17. The Makwanpur study suggested that women’s community groups and peer-to-peer education may effectively reduce both neonatal and maternal mortality.

References
American Anthropological Association (AAA) 2012: http://www.aaanet.org/coe/Code_of_Ethics.pdf, last accessed on 23 October 2012.
Antman, E.M., Lau, J., Kupelnick, B., et al. 1992: A comparison of results of meta-analysis of randomized controlled trials and recommendations of clinical experts. Treatment for myocardial infarction. Journal of the American Medical Association 268, 240–48.
Behague, D. 2009: The epistemological ethics of research in global health, pp. 36–37, http://www.yale.edu/macmillan/smaconference/SessionAbstracts.pdf, last accessed on 23 October 2012.
Behague, D. and Storeng, K. 2007: An ethnography of evidence-based policy-making in international maternal health. Funded by the Economic and Social Research Council, RES-000-22-1039, http://mp.lshtm.ac.uk/ESRC.html, last accessed on August 2012.
——— 2008: Collapsing the vertical–horizontal divide: An ethnographic study of evidence-based policymaking in maternal health. American Journal of Public Health 98(4), 644–49.
Behague, D., Tawiah, C., Rosato, M., Somed, T. and Morrison, J. 2009: Evidence-based policymaking: The implications of globally-applicable research for context-specific problem-solving in developing countries. Social Science and Medicine 69, 1539–46.
Bishop, L. 2007: A reflexive account of reusing qualitative data: Beyond primary/secondary dualism. Sociological Research Online 12(3), http://www.socresonline.org.uk/12/3/2.html, last accessed on 29 May 2013.
Boaz, A. and Pawson, R. 2005: The arduous road from evidence to policy: Five journeys compared. Journal of Social Policy 34, 173–94.
British Sociological Association (BSA) 2003: Statement of ethical practice for the British Sociological Association, www.britsoc.co.uk/media/27107/StatementofEthicalPractice.pdf, last accessed on August 2012.
Broom, A., Cheshire, L. and Emmison, M. 2009: Qualitative researchers’ understandings of their practice and the implications for data archiving and sharing. Sociology 43(6), 1163–80.
Cabinet Office 1999: Modernising government. The Stationery Office, London.
Cable, V. 2003: Evidence and UK politics. Transcript of presentation as part of an ODI Meeting Series on ‘Does Evidence Matter?’
Camfield, L. 2014: Introduction. In Research in international development: A critical review. Palgrave Macmillan, forthcoming.
Camfield, L. and Palmer-Jones, R. 2012: Three ‘Rs’ of development econometrics: Repetition, reproduction and replication. Journal of Development Studies Special Section (in press).
Chalmers, I., Hedges, L.V. and Cooper, H. 2002: A brief history of research synthesis. Evaluation and the Health Professions 25(1), 12–37.
Deaton, A. 2009: Instruments of development: Randomization in the tropics, and the search for the elusive keys to economic development. The Keynes Lecture, British Academy, 9 October 2008.
Economic and Social Research Council (ESRC) 2010: Framework for research ethics, http://www.esrc.ac.uk/_images/Framework_for_Research_Ethics_tcm8-4586.pdf, last accessed on August 2012.
Flyvbjerg, B. 1998: Rationality and power: Democracy in practice. University of Chicago Press.
Gallo, P.S., Jr 1978: Meta-analysis – A mixed metaphor? American Psychologist 33, 515–17.
Gillies, V. and Edwards, R. 2011: An historical comparative analysis of family and parenting: A feasibility study across sources and timeframes. Working Paper No. 24, Families and Social Capital Research Group.
Golden, M.A. 1995: Replication and non-quantitative research. PS: Political Science and Politics 28, 481–83.
Government Office for Science 2007: Rigour, respect, responsibility: A universal ethical code for scientists. UK Department for Innovation, Universities and Skills.
Greenhalgh, T. and Russell, J. 2009: Evidence-based policy: A critique. Perspectives in Biology and Medicine 52, 304–18.
Guyatt, G.H., Haynes, R.B., Jaeschke, R.Z., Cook, D.J., Green, L., Naylor, C.D., Wilson, M.C. and Richardson, W.S. 2000: Users’ guides to the medical literature: XXV. Evidence-based medicine: Principles for applying the users’ guides to patient care. Evidence Based Medicine Working Group. JAMA 284, 1290–96.
Hamermesh, D.S. 2007: Viewpoint: Replication in economics. Canadian Journal of Economics 40, 715–33.
Hammersley, M. 1997: Qualitative data archiving: Some reflections on its prospects and problems. Sociology 31(1), 131–42.
——— 2009: Against the ethicists: On the evils of ethical regulation. International Journal of Social Research Methodology 12, 211–25.
Harriss-White, B. and Harriss, J. 2007: Green revolution and after: The ‘North Arcot papers’ and long-term studies of the political economy of rural development in south India. QEH Working Paper No. 146.
Iphofen, R. 2009: Ethical review – Barrier or facilitator to research? AREC Winter Conference.
Jones, H. 2012: Promoting evidence-informed decision-making in development agencies. ODI Background Note, London.
Lambert, H. 2009: Evidentiary truths? The evidence of anthropology through the anthropology of medical evidence. Anthropology Today 25(1), 16–20.
Lewis, D. and Opoku-Mensah, P. 2006: Moving forward research agendas on international NGOs: Theory, agency and context. Journal of International Development 18, 1–11.
Lindblom, C.E. 1959: The science of ‘muddling through’. Public Administration Review 19, 79–88.
Manandhar, D.S., Osrin, D., Shrestha, B.P., Mesko, N., Morrison, J., Tumbahangphe, K.M., Tamang, S., Thapa, S., Shrestha, D., Thapa, B., Shrestha, J.R., Wade, A., Borghi, J., Standing, H., Manandhar, M., Costello, A.M.L. and members of the MIRA Makwanpur trial team 2004: Effect of participatory intervention with women’s groups on birth outcomes in Nepal: Cluster randomized control trial. Lancet 364, 970–79.
Manski, C. 2011: Policy analysis with incredible certitude. Economic Journal 121, F261–89.
Melamed, C., Devlin, N. and Appleby, J. 2012: ‘Valuing development’: Could approaches to measuring outcomes in health help make development more accountable? Research reports and studies, Overseas Development Institute, London.
Mohan, G. and Wilson, G. 2005: The antagonistic relevance of development studies. Progress in Development Studies 5, 261–78.
Mosse, D. 2004: Cultivating development: An ethnography of aid policy and practice. Pluto Press.
Mosse, D. 2009: Politics and ethics: Ethnographies of expert knowledge and professional identities. In Wright, S. and Shore, C., editors, Policy worlds. Berghahn (in press).
Oxford University Press (OUP) 2012: Oxford English dictionary.
Petryna, A. 2009: When experiments travel: Clinical trials and the global search for human subjects. Princeton University Press.
Pieterse, N. 2001: Development theory: Deconstructions/reconstructions. SAGE.
Pope, C. 2003: Resisting evidence: The study of evidence-based medicine as a contemporary social movement. Health 7, 267–82.
Power, M. 1997: From risk society to audit society. Soziale Systeme 3, 3–21.
Pritchett, L. 2002: It pays to be ignorant: A simple political economy of rigorous program evaluation. The Journal of Policy Reform 5, 251–69.
Sackett, D.L., Rosenberg, W.M., Gray, J.A., Haynes, R.B. and Richardson, W.S. 1996: Evidence based medicine: What it is and what it isn’t. BMJ 312, 71–72.
Savage, M. 2010: Identities and social change in Britain since 1940: The politics of method. Clarendon.
Shore, C. and Wright, S. 1999: Audit culture and anthropology: Neo-liberalism in British higher education. Journal of the Royal Anthropological Institute 5, 557–75.
Sign 2012: Annex B: Key to evidence statement and grades of recommendations, levels of evidence, http://www.sign.ac.uk/guidelines/fulltext/50/annexb.html, last accessed on August 2012.
Silva, E.B. 2007: What’s [yet] to be seen? Re-using qualitative data. Sociological Research Online 12(3), www.socresonline.org.uk/12/3/4.html, last accessed on 29 May 2013.
Strathern, M. 2000: Audit cultures: Anthropological studies in accountability, ethics and the academy. Routledge.
Sumner, A. and Harpham, T. 2008: The market for ‘evidence’ in policy processes: The case of child health policy in Andhra Pradesh, India and Vietnam. European Journal of Development Research 20, 712–32.
