This article was downloaded by: [Arbejdsmiljo Instituttet] On: 22 July 2012, At: 23:33 Publisher: Psychology Press Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

European Journal of Work and Organizational Psychology Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/pewo20

Opening the black box: Presenting a model for evaluating organizational-level interventions

Karina Nielsen (a) and Raymond Randall (b)

(a) National Research Centre for the Working Environment, Copenhagen, Denmark
(b) Department of Psychology, University of Leicester, Leicester, UK

Version of record first published: 17 Jul 2012

To cite this article: Karina Nielsen & Raymond Randall (2012): Opening the black box: Presenting a model for evaluating organizational-level interventions, European Journal of Work and Organizational Psychology, DOI: 10.1080/1359432X.2012.690556
To link to this article: http://dx.doi.org/10.1080/1359432X.2012.690556


EUROPEAN JOURNAL OF WORK AND ORGANIZATIONAL PSYCHOLOGY 2012, 1–17, iFirst article

Opening the black box: Presenting a model for evaluating organizational-level interventions


Karina Nielsen (1) and Raymond Randall (2)

(1) National Research Centre for the Working Environment, Copenhagen, Denmark
(2) Department of Psychology, University of Leicester, Leicester, UK

Organizational-level occupational health interventions are often recommended when improvements in working conditions, employee health, and well-being are sought within organizations. Research has revealed that these interventions result in inconsistent effects despite being based on theoretical frameworks. This inconsistency indicates that intervention studies need to be designed to examine directly how and why such interventions bring about change and why they sometimes fail. We argue that intervention studies should include a process evaluation that closely examines the psychological and organizational mechanisms that hinder and facilitate desired intervention outcomes. Drawing on the existing intervention literature, we present an evidence-based model containing three levels of elements that appear to be crucial in process evaluation. We describe how this model may be applied and developed in future research to better identify the mechanisms that link intervention processes to intervention outcomes.

Keywords: Methods; Organizational-level occupational health interventions; Process evaluation.

Organizational-level occupational health interventions can be defined as: ‘‘planned, behavioral, science-based actions to remove or modify the causes of job stress’’ (Mikkelsen, 2005, p. 152). In current European legislation there is a clear emphasis on the use of such interventions (i.e., changes in the design, organization, and management of work) as the preferred way of improving working conditions and tackling problems such as work stress (EU-OSHA, 2010). This strategy has, however, been criticized because of the lack of a large and consistent body of evidence showing that these interventions have a positive impact on working conditions and employee health and well-being (e.g., Briner & Reynolds, 1999; Richardson & Rothstein, 2008). Others have argued that there is evidence of the positive impact of organizational-level occupational health interventions but that too few studies have examined why and how such interventions have succeeded or failed, thus placing limits on the external validity of much of the small body of evaluation research (Egan, Bambra, Petticrew, & Whitehead, 2009; Murta, Sanderson, & Oldenburg, 2007; Semmer, 2011). In their review, LaMontagne, Keegel, Louie, Ostry, and Landsbergis (2007) concluded that the published literature focused on effect evaluation and contained relatively little, potentially important, process evaluation data about how interventions were planned and implemented. In studies where process evaluation is attempted, it is often based on anecdotal data that have not been subjected to structured analysis (Bambra, Egan, Thomas, Petticrew, & Whitehead, 2007; Murta et al., 2007; Roen, Arai, Roberts, & Popay, 2006). In this article we present a three-level evidence-based process evaluation model. This is intended to provide a structure that researchers can use to guide the rigorous collection of detailed process evaluation data. We argue that the model can be used to strengthen the evaluation of organizational-level occupational health interventions in a number of ways.

Correspondence should be addressed to Karina Nielsen, National Research Centre for the Working Environment, Lersoe Park Alle 105, DK-2100 Copenhagen, Denmark. Email: [email protected]. Raymond Randall's current affiliation is the School of Business and Economics, Loughborough University, Loughborough, UK. This research was funded by the National Work Environment Research Fund (Grant 16-2004-09). The authors declared no potential conflicts of interest with respect to the authorship and/or the publication of this article. © 2012 Karina Nielsen http://www.psypress.com/ejwop

http://dx.doi.org/10.1080/1359432X.2012.690556


Organizational-level occupational health interventions are proactive in that they are focused on reducing or eliminating the sources of job stress (Hurrell & Murphy, 1996; LaMontagne et al., 2007). Researchers have found that some show no effects, others have been linked to improvements in working conditions and employee health and well-being, and a small number have prefaced deteriorations in these variables (Bambra et al., 2007; Egan et al., 2007; Semmer, 2003, 2006). This mixed evidence has led to considerable confusion and debate about how research findings should be used to guide practice. The problem stems from a prevailing focus on effect-only evaluation (Ruotsalainen et al., 2006). It is difficult to conclude why and how an intervention worked from effect evaluation data alone (Lipsey, 1996; Rychetnik, Frommer, & Shiell, 2002) because effect-only evaluation data mask intervention effects that are sensitive to variations in intervention processes (Lipsey, 1996).

The nature of organizational-level occupational health interventions indicates that their working mechanisms are unlikely to be separate from the systems within which they operate. These interventions require changes to complex social systems and may be met with much resistance and have unintended side-effects (Semmer, 2003). The internal validity of evaluation studies of these interventions may be threatened by concurrent changes that work against the intervention plan: This means that the same intervention could yield powerful effects if the context were less disruptive to the intervention plans and processes (Mikkelsen, 2005). Moreover, changing complex systems requires wide-ranging, multifaceted activities, and diluted or disrupted intervention activities may not be intense enough to have an impact (Bambra et al., 2007).
Considering the complexity of organizational-level occupational health interventions, evaluation models and methods are needed that can be used to identify how the potential effects of interventions on health and well-being are moderated and mediated by intervention processes. Such a shift in evaluation strategy represents a move away from ‘‘black box’’ evaluation and towards an approach that can elaborate on the mechanisms through which changes in the outcomes operate: Looking inside the black box reveals various sources of variation (Lipsey & Cordray, 2000). A fundamental objective of this shift is to differentiate between theory/programme failure (that the theory behind the intervention did not address the problem) and implementation failure (that the way the intervention was implemented was incomplete or designed in such a way that the intervention would have failed even if the theory behind the intervention was correct) (Harachi, Abbott, Catalano, Haggerty, & Fleming, 1999). Intervention process has been defined as ‘‘individual, collective and management perceptions and actions in implementing any intervention and their influence on the overall result of the intervention’’ (Nytrø, Saksvik, Mikkelsen, Bohle, & Quinlan, 2000, p. 214). This means that process evaluation (PE) may be used to (1) provide feedback for improving interventions, (2) replicate interventions in other settings while minimizing the number of pitfalls associated with a given intervention, (3) interpret the outcomes of interventions (Goldenhar, LaMontagne, Katz, Heaney, & Landsbergis, 2001), and (4) help us draw conclusions about the generalizability, applicability, and transferability of intervention studies (Armstrong et al., 2008). In summary, PE is needed to evaluate the generalizability of an intervention (to answer questions such as ‘‘Under which circumstances will an intervention work?’’ and ‘‘Which were the processes that facilitated the change?’’) so that it can be implemented successfully in a variety of settings (Cooper, Dewe, & O'Driscoll, 2001).

One of the reasons why PE is lacking in current research on organizational-level occupational health interventions may be that researchers are uncertain about what should be included in such an evaluation. A number of papers have listed concepts that may be examined in PE (Egan et al., 2009; Lipsey & Cordray, 2000; Murta et al., 2007; Nytrø et al., 2000). These include organizational contexts, intervention reach, dose delivered and dose received, intervention fidelity, support and available resources, recruitment, and attitudes towards the intervention. However, there is no integrated, evidence-based framework that describes the elements that need to be included in process evaluations of organizational-level occupational health interventions. We propose that the factors that may have an impact on the outcomes of an organizational-level occupational health intervention can be grouped into three themes.
The themes are: the intervention design and implementation, the intervention contexts, and participants' mental models (of the intervention and their work situation). The first theme determines the maximum levels of intervention exposure that can be achieved; the latter two represent the factors that may moderate or mediate the link between any intervention exposure and its outcomes. Within each theme we identify a number of specific questions about the intervention processes that need to be answered through the collection of process evaluation data. Together, the collection of data across these themes is likely to provide the evaluator with some useful insights into the factors that influence the outcomes of an organizational-level occupational health intervention. In this article we have chosen to focus on the literature on health and well-being interventions at the organizational level and the factors and elements identified in this literature. This is because PE models are needed that fit with the measurement opportunities and constraints operating in this domain; however, models of PE have been used with considerable success in other disciplines such as public health, organizational development, and organizational change (Armenakis & Bedeian, 1999; Burke & Litwin, 1992; Cummings & Worley, 2009; Rossi, Lipsey, & Freeman, 2004; Steckler & Linnan, 2002), and many of the factors and elements identified in these disciplines may also be relevant in a model of the evaluation of the processes of organizational-level occupational health interventions.

In developing a model specific to organizational-level occupational health interventions, we drew on four sources of information. First, we identified two rigorous review articles focusing on process factors (Egan et al., 2009; Murta et al., 2007) and from these we drew the relevant factors identified in our model. Second, we identified a number of papers that focused on the topic of intervention implementation (Cooper et al., 2001; Guastello, 1993; Lipsey, 1996; Nytrø et al., 2000; Pettigrew, 1990; Semmer, 2003, 2006, 2011; Shannon, Robson, & Guastello, 1999; Vedung, 2006). Third, we reviewed existing intervention studies to identify any analysis, however anecdotal, of process factors such as mental models, context, and/or intervention design and implementation. These are the papers that are referred to throughout this article. Finally, we developed a preliminary model which we tested in semistructured interviews (N = 54) with key stakeholders including human resources practitioners, internal consultants, and managers, and in 44 focus groups with employees in two organizations, one private- and one public-sector, to identify further themes and confirm existing themes (references withheld to ensure anonymity).

A MODEL OF PROCESS EVALUATION

Intervention design and implementation

In the following section we discuss the elements that should be included to document the intervention design and implementation. We focus on three overarching elements: initiation, intervention activities, and implementation strategy. These themes are not orthogonal: Because we are describing interlinked and complex organizational processes, issues within one theme may also interact with issues in other themes. The model is presented in Figure 1. To allow readers to see more clearly how the model may be translated into PE tools and methods, the key issues are presented as questions that should be addressed during PE.

Figure 1. Model of process evaluation.

Initiation: Who initiated the intervention and for what purpose?

The motivation driving an intervention may be related to problems internal to the organization (to deal with a crisis, to improve quality and productivity, or to become a healthy workplace), to external challenges (e.g., legislative requirements), or to a combination of both (Kompier, Geurts, Grundemann, Vink, & Smulders, 1998; Shannon & Cole, 2004). Any intervention can stabilize or displace current power structures, and therefore the reasons for the intervention are likely to influence the buy-in of key stakeholders (Fredslund & Strandgaard, 2005). It is therefore important to explore who defined the problem, who decided what should be done, and who should implement change. This means that it is also important to identify the key stakeholders in the initiation process: These may be managers, employees, union representatives, occupational health practitioners, and clients. This identification may help in understanding the reactions and actions of other key stakeholders (see later). Some of the effects of this decision-making process have been identified by Egan et al. (2007). They concluded that interventions initiated for performance reasons were found to have an adverse impact on employee health and well-being, whereas interventions whose rationale was to improve employee health and well-being were found to have a positive effect on these same outcome measures.

Developing intervention activities: Did the intervention activities target the problems of the workplace?

It has been argued by many researchers that the correct tailoring of an intervention to the needs of stakeholders requires a thorough risk assessment (LaMontagne et al., 2007; Nielsen, Randall, Holten, & Rial González, 2010; Noblet & LaMontagne, 2009). A thorough risk assessment is a crucial diagnostic process (Kompier et al., 1998) and provides information that can be used to check whether intervention activities addressed the problems perceived by organizational members. Context-independent organizational-level strategies have been described as unlikely to succeed: Each organization is unique and therefore requires unique solutions (Hurrell & Murphy, 1996). Thus, it is important during the PE to examine whether intervention activities were tailored to the problem as it manifested itself in the specific organizational context.
Tailoring of organizational-level interventions does not usually include adapting interventions to meet the requirements of specific individual employees. This has been cited as a potential problem with organizational interventions: An optimal strategy may be to use a combination of different interventions (LaMontagne et al., 2007). Individual-level activities implemented during the development of organizational-level interventions may prime participants to support and engage in organizational-level changes when they are implemented (Nielsen, Randall, Brenner, & Albertsen, 2009; Nielsen, Randall, & Christensen, 2010). Developing structured action plans may also facilitate effective intervention: Such plans describe intervention activities in terms of the resources needed, the activities undertaken, and how the intervention is implemented, including identifying who is responsible and who the targets of intervention were (Nielsen, Randall, Holten, & Rial González, 2010). The contents of action plans also often highlight the potential ‘‘active ingredients’’ of the intervention that could be linked to intervention outcomes (Nielsen, Randall, & Christensen, 2010).

Implementing intervention activities: Did the intervention reach the target group?

Careful documentation of the actual implementation of intervention activities is a vital element of process evaluation (Semmer, 2003, 2006, 2011) because this will highlight any discrepancies between the planned intervention and its implementation (Roen et al., 2006). Important questions include: ‘‘Which aspects of the intervention activities brought about noticeable changes?’’ ‘‘How many changes were delivered to whom?’’ ‘‘Who noticed/reported these changes?’’ The potential active ingredients of the intervention also need to be reexamined and compared to the active ingredients identified in the intervention plan. This identification helps to rule out rival hypotheses for intervention results, i.e., that factors other than the intervention account for observed changes or lack of change (Lipsey, 1996). The importance of documenting differences between planned and actual exposure has been highlighted in a number of studies. Nielsen, Fredslund, Christensen, and Albertsen (2006) found that in a designated intervention group no changes in well-being were detected because intervention activities had not been implemented, whereas changes were observed in a designated control group because the manager had initiated activities intended only for the intervention groups. Similarly, Landsbergis and Vivona-Vaughan (1995) found that in an intervention department where no effects were found, a planned ‘‘policy and procedures’’ manual had not been completed and implemented.

Implementation strategy

In this section we focus on the roles and behaviours of key stakeholders. Later, in the section on mental models, we focus on the appraisals and perceptions of key stakeholders and how these may drive key stakeholders' behaviours, thereby indirectly influencing intervention outcomes.

Drivers of change and the roles of key stakeholders: Who were/are the drivers of change?

In complex interventions, there are often many stakeholders in the intervention process and therefore many potential drivers of change. In PE these stakeholders must be identified and their role in the change process explored. It is important to identify who has the power to make changes in order to identify how much they were involved in intervention activities (Nytrø et al., 2000). Next we discuss in further detail the roles of key change agents.

Participatory approaches—involving employees: Did employees participate significantly in decision making and how many were involved?

The participatory approach has been advocated as a desirable intervention strategy and plays a major role in well-known organizational occupational health intervention approaches (Nielsen, Randall, Holten, & Rial González, 2010). Participation in the development of health-promoting activities is also included in the guidelines of the World Health Organization and the European Network for Workplace Health Promotion (European Network for Workplace Health Promotion, 2007). The essence of participation is a conscious and intended effort made by individuals at a higher level in an organization to provide visible extrarole or role-expanding opportunities and enhanced control for individuals or groups at a lower level in the organization (e.g., to have a greater voice). Participation can take different forms: It may be informal or delivered through formal changes in roles and responsibilities, experienced directly or indirectly through union representatives, and the breadth and depth of participation and the extent of influence linked to the participatory activities can also vary (Lines, 2004). Several pieces of research discuss how both qualitative (type of) and quantitative (amount of) participation might have influenced intervention outcomes. Nielsen et al. (2006) found that employees with little formal education benefited most from a directive type of participation where they were told what to do. Aust, Rugulies, Finken, and Jensen (2010) found that employees reacted negatively to having influence over only parts of the intervention programme, i.e., limited influence over the scope of the problem.
Concerning the amount of participation, Lines (2004) found that it was negatively related to resistance to change, and positively related to achievement of goals and organizational commitment. Similarly, Nielsen, Randall, and Albertsen (2007) found that high levels of reported participation in change were associated with low levels of behavioural stress symptoms and higher job satisfaction after the intervention. Eklöf, Ingelgård, and Hagberg (2004) found that the degree of participation in the resolution of occupational health concerns was consistently associated with decreased work demands, increased social support, and lowered stress levels.

Senior management support: What was the role of senior managers?

Senior managers are often involved in the allocation of resources to intervention projects (i.e., the resources needed to plan, implement, and evaluate the project), they may act as role models through their attitudes to the intervention, and they may be actively involved in intervention activities (Giga, Noblet, Faragher, & Cooper, 2003; Lindström, 1995; Randall, Cox, & Griffiths, 2007). However, because of their seniority and their own work demands they are rarely able to follow intervention development and implementation closely (Nytrø et al., 2000). Although the importance of senior management support is often discussed, it is seldom formally evaluated (Nielsen, Randall, Holten, & Rial González, 2010; Semmer, 2011). What research there is suggests that such support can be important. In a study of stress coping training, Lindquist and Cooper (1999) found that when senior management released staff from their duties to participate in workshops, attendance was 100%. In contrast, at follow-up, when staff had to participate during their leisure time, participation dropped to 66%. Saksvik, Nytrø, Dahl-Jørgensen, and Mikkelsen (2002) reported that decreased opportunities for staff to take part in participatory workshops were due in part to constraints imposed by senior management, which allowed employees time to participate only in short workshops. In this example, the lack of support from senior management also had a ‘‘trickle down’’ effect on middle managers, who reported that they did not support the intervention project as they were allocated no resources to implement initiatives.

Middle managers: What was the role of middle managers?

While senior managers often make the decision to implement the intervention, it is usually middle managers who are subsequently responsible for communicating and implementing change (Guth & MacMillan, 1986). Therefore, middle managers play a crucial role in many organizational-level occupational health interventions (Nielsen & Randall, 2009; Randall et al., 2007).
Kompier, Cooper, and Geurts (2000) found that in all 11 case studies they collected from across Europe, middle managers were primarily responsible for stress prevention interventions. This puts middle managers in a position to hinder or facilitate the change. For example, Dahl-Jørgensen and Saksvik (2005) reported that middle managers resisted change by restricting the time spent on interventions by employees. Similarly, Nielsen and Randall (2009) found that where middle managers were perceived as supportive and took an active part in implementing change, employees reported better working conditions and higher levels of psychological well-being after the intervention. Middle managers can also bring about changes in intended intervention exposure patterns. For example, Nielsen et al. (2006) found that a new manager who resented being in the control group initiated and implemented activities similar to those planned in the intervention groups, with positive outcomes. In a process evaluation of seven intervention projects, Saksvik et al. (2002) found that middle managers had often exerted passive resistance that had damaged and diluted some intervention activities. Together these findings indicate that middle managers' motivation for implementing change should be documented along with the actions they take to facilitate or obstruct change. This documentation is especially important because such data may not be captured in their performance appraisals, and in times of pressure they may therefore choose to prioritize other aspects of their job over and above intervention activities (Saksvik et al., 2002).

Consultants: What was the role of consultants?

Large intervention projects often use external consultants to design, implement, and facilitate aspects of the intervention process (Nielsen, Randall, Holten, & Rial González, 2010). Lindström (1995) reported that process consultants facilitated organizational changes by giving feedback on the progress of change and on group dynamics. In another study, the consultants played a role in the proliferation of the intervention activities as they sold services similar to those implemented to other groups in the organization (Nielsen et al., 2006). However, few intervention studies have included an evaluation of the role or the competencies of the external consultant(s) (Semmer, 2006). In order to isolate intervention effectiveness it is important to evaluate whether the consultants had the necessary skills and abilities to enhance the intervention process by motivating and guiding participants through it (Landsbergis & Vivona-Vaughan, 1995).
It may also be that when external consultants have total responsibility for change they leave no infrastructure within the organization for sustaining and continuing the improvements they initiated, thus reducing long-term intervention effects (Dahl-Jørgensen & Saksvik, 2005). Nielsen, Cox, and Griffiths (2002) argued that, for intervention effects to be maintained in the long term, a shift must take place in which organizational members gradually take over more responsibility for the intervention from the consultant.

Information and communication about the intervention: What kind of information was provided to participants during the study?

It has been shown that the level of information and communication plays an important role in the effects of interventions (Jimmieson, Terry, & Callan, 2004). Providing information about a change keeps employees up to date about anticipated events, the consequences of change, and changes in work roles linked to the intervention (Øyum, Kvernberg Andersen, Pettersen Buvik, Knutstad, & Skarholt, 2006). It has also been found that open communication helps employees to understand the intentions behind organizational-level occupational health interventions, thus improving employee commitment to and participation in the intervention (Nytrø et al., 2000). Communication is likely to influence employees' sense making (e.g., their perception of the motives and objectives of the intervention), and this appears to be closely linked to their commitment to intervention activities (Weick, Sutcliffe, & Obstfeld, 2005). Therefore, it is important to examine what kind of information has been distributed, to whom, and how it has been received and perceived. This means that three important questions need to be answered. (1) Were participants informed about the project? Nielsen et al. (2007) found that receiving adequate information about an intervention project predicted the extent to which employees participated in intervention activities. One important caveat to this finding was that where information was not followed up by actual activities, employees were disappointed and reported negative results. (2) Were risk assessment results fed back? This feedback has been found to lead to more intervention activities (Eklöf, Hagberg, Toomingas, & Tornqvist, 2004). In a later study, Eklöf and Hagberg (2006) found that the most significant changes in social support were observed in parts of the organization where supervisors, and to a lesser extent work groups, had received detailed information about the problems identified in the risk assessment. (3) To what extent were all participants updated about progress?
Landsbergis and Vivona-Vaughan (1995) found that those employees not directly involved in intervention planning and implementation tended to be less aware of the progress of the intervention: These employees also reported that the intervention had little effect.

Context Field studies are conducted to enhance the ecological validity of intervention research. However, this validity can only be achieved if the influence of the social and organizational context on intervention outcomes is measured and analysed (Heaney et al., 1993; Rousseau & Fried, 2001). Context can be defined as ‘‘situational opportunities and constraints that affect the occurrence and meaning of organizational behaviour as well as functional relationships between variables’’ (Johns, 2006, p. 386). The context may either facilitate or hinder successful implementation. Intervention context can provide a link between intervention plans and intervention exposure (i.e., work as a mediator) or can dilute or strengthen the
effects of intervention activities (i.e., work as a moderator). The overall question to ask is: "Which hindering and facilitating factors in the context influenced intervention outcomes?" Because the context is diverse and multifaceted, the concepts of omnibus and discrete context (Johns, 2006) provide a useful framework for PE.

Omnibus context. This refers to the story told and prompts several process evaluation questions, including: "Who are the participants in the intervention and who drives the intervention?" "Where does it take place?" "When did the intervention take place?" The underlying theme of these questions is: "How did the intervention fit in with the culture and conditions of the intervention group?" Dahl-Jørgensen and Saksvik (2005) found that a context of high job demands often hindered participation in interventions. Organizational culture may also play a role: Saksvik et al. (2002) found that a bureaucratic organizational structure or being part of a larger, international organization hindered the development of intervention activities. Furthermore, it is important to ask: "What capacity does the organization have to conduct interventions?" The preintervention healthiness of an organization and its past use of and experience with such interventions have been found to affect organizational occupational health intervention outcomes (Semmer, 2006). Workplaces with low demands, high levels of support, and low stress levels may have more time and resources to involve workers and managers in the participation in and integration of interventions. On the other hand, healthy organizations with low levels of stress and a good working environment may not need interventions (Taris et al., 2003). Indeed, ceiling effects may prevent further improvement in intervention outcomes, even if the theory behind the intervention is correct (Nielsen et al., 2006). This represents an intervention paradox whereby the omnibus context can inhibit intervention where it is needed most.

Discrete context. This aspect of context focuses on specific events that may have influenced the effects of the intervention. The question asked here is: "Which events took place during the intervention phase?" Some factors that have been identified here are new project management demands (Nielsen et al., 2006), conflicting priorities, concurrent use of multiple change programmes (Guastello, 1993), and lack of integration of the intervention with important corporate strategic decision-making activities (Schurman & Israel, 1995). Factors at both the intraorganizational (e.g., introducing conflicting initiatives; Nielsen et al., 2006; Randall et al., 2007)

7

and national level (e.g., economic recession; Landsbergis & Vivona-Vaughan, 1995; Mikkelsen & Saksvik, 1999; Nielsen, Randall, & Christensen, 2010) should be considered.

Mental models

Recently, research has begun to examine how individuals' perceptions and appraisals of an organizational-level occupational health intervention are linked to outcomes through the way they drive the behaviours of key stakeholders. Employees, managers, and other key stakeholders may have diverse and potentially conflicting agendas that influence how they behave and react to the intervention. These underlying psychological processes may help to explain change outcomes (Nytrø et al., 2000) but have rarely been measured directly in intervention research. The main question to ask here is: "What is the role of participants' mental models in determining their response to the intervention?" Mental models are used to make sense of the world, and explicit efforts at sense making take place when the world is perceived to be different from its expected state, e.g., when changes at work are occurring (Weick et al., 2005). Translated into an intervention context, mental models determine how participants react to the intervention and its activities and help explain the behaviours of key stakeholders throughout the intervention project. For example, it has been found that different stakeholders hold conflicting mental models about what constitutes success (Cole et al., 2003; Shannon & Cole, 2004). Detecting these different perspectives may help to explain how different motivations drive differences in key stakeholders' behaviours during the intervention process. Saksvik et al. (2002) found that managers preferred individual-level interventions, i.e., placing responsibility for change on the individual. In contrast, employees held negative attitudes towards these interventions because they felt that this strategy was a way for managers to escape responsibility while failing to address the problems in the workplace.
For interventions to be effective, it has been argued that employees should perceive that they have problems that need to be addressed, believe that the intervention will be effective in addressing those problems, and be motivated to actively support the intervention by participating in intervention activities (Nytrø et al., 2000). This implies that an important PE question is: "To what extent are participants ready for change?" Readiness for change has been widely researched, but rarely within the organizational-level occupational health intervention literature. There is an abundance of literature that has
linked the degree to which employees welcome and actively support the implementation of change to organizational development and change outcomes (Weiner, Amick, & Lee, 2008). A number of studies have discussed the importance of mental models of readiness for change in organizational-level occupational health intervention research. Randall, Griffiths, and Cox (2005) found that a change in responsibility had not been communicated to staff because managers appraised the intervention as a threat to their own working conditions: They were not ready for the change. Another mental model concerning readiness for change is initiative fatigue: If organizational-level occupational health interventions have previously been conducted but little learning has taken place, this failure may have a detrimental impact on participants' perceptions of later initiatives and their willingness to participate in intervention activities (Nytrø et al., 2000; Saksvik et al., 2002). Over time, people who work closely together may develop similar models with which to interpret and react to the world (Mathieu, Heffner, Goodwin, Salas, & Cannon-Bowers, 2000). In an intervention context, participants with shared mental models may perceive the intervention and its activities in a similar manner and, as a result, react in similar ways to the introduced intervention. Where shared mental models have not developed, individuals may have conflicting agendas that hinder effective implementation and lead to diversity in intervention experiences and outcomes.
Recent research has shown that a lack of support for an intervention programme was the result of differences in stakeholder views about the most effective intervention option: Consultants felt the focus of the intervention programme should be leadership development, whereas employees thought it should have focused on employee involvement (Aust et al., 2010). Therefore, process evaluation should examine mental models and the degree to which they are shared by participants, using the question: "To what degree do participants have shared mental models?" An equally important mental model to explore is: "How did participants perceive the intervention and its activities?" Employees' perceptions of the drivers of change and the intervention objectives are likely to influence their willingness to participate in intervention activities. If they believe activities do not address the problems raised, or are of poor quality, this may reduce their engagement. A study of the implementation of a Stress Risk Assessment tool revealed that middle managers (who were responsible for using the tool) failed to use it after they attended a training course because

they felt there was no need for risk assessment since their perception was that stress was not a problem (Biron, Gatrell, & Cooper, 2010). In a study of an organizational improvement programme and an individual-level stress management programme, Bunce and West (1996) found that the perceived smoothness of implementation of the training programme and the depth of programme content were related to lower levels of stress and higher levels of job satisfaction after the programme. Nielsen et al. (2007) found that individuals' appraisals of the quality and sustainability of intervention activities were positively linked to postintervention well-being. In participatory action research projects where employee representatives are involved in making change, it is particularly important to examine the mental models of those not directly involved in decision making but who were targeted by the intervention (Landsbergis & Vivona-Vaughan, 1995). In some instances, perceptions of key stakeholders' expertise may also be influential. Nielsen et al. (2006) reported that employees with little education and little experience in dealing with occupational health issues appraised an external occupational health practitioner more positively because of her directive approach and focus on individual issues.

Changes in mental models of the job. In order for real changes to happen as a result of organizational-level occupational health interventions, it has been pointed out that participants and key stakeholders must unlearn old mental maps of their working conditions and learn new ones (Schurman & Israel, 1995). An important driver of this change is the degree to which intervention activities prompt a shift from espoused theories about the intervention to theories-in-use (Argyris, 1976, 1995). Theories-in-use are the mental models that guide our behaviour, whereas espoused theories are the attitudes and beliefs that we tell others guide our behaviour.
According to Argyris (1976, 1995), real change only happens when individuals change their theories-in-use. Therefore, an important part of process evaluation should be the measurement of change in employees' knowledge of the intervention, their expectations that the intervention can bring about changes, and their belief that these changes can have an impact and be sustained as part of continuous improvements at work (Schurman & Israel, 1995). The important question here is: "Did the intervention bring about a change in participants' mental models?" In a randomized, controlled study, Nielsen, Randall, and Christensen (2010) found that team manager training that produced changes in mental models led to changes in managers' behaviours and, as a consequence, to improvements in their subordinates' involvement and job satisfaction.


PE methods

In the previous sections, we described a set of topics that we argue should be considered when determining the validity and generalizability of organizational-level occupational health interventions. This leads to an important question: "How do we get this information?" We argue that process evaluation calls for a mixed methods approach. Detecting the different active ingredients of organizational-level occupational health interventions requires the use of a range of different methods. The measurement of some active ingredients requires the collection of observer or objective data, e.g., what can be seen to be happening. Other PE constructs are appraisals (e.g., how participants appraised the intervention influences intervention outcomes) that cannot be directly observed. In addition, the wide range of PE constructs and the importance of context indicate that the flexible approach to data collection and analysis offered by qualitative methods is likely to have utility. Combining qualitative and quantitative results through mixed methods may offer four important benefits: identification of the mechanisms behind any changes brought about by the intervention; added meaning to the results of outcome evaluation; cross-validated and triangulated results; and identification of the impact of the intervention context on the processes and outcomes of change (Greene, Benjamin, & Goodyear, 2007; Hugentobler, Israel, & Schurman, 1992; Johnson, Onwuegbuzie, & Turner, 2007; Nastasi et al., 2007). Unfortunately, reports of intervention research rarely include interview schedules, observation strategies, or detailed information about how these data were collected and used. It is therefore difficult to refer to specific examples that provide guidance on how this may be done. In the following sections, we therefore describe the data that may be collected and provide preliminary estimates of the suitability of different data collection methods (see Table 1).

Documentation and appraisal of the intervention design and implementation

Initiation. In the first step it is important to document the precursors of intervention activities. This provides documentary evidence of the process. Such information can be obtained through sources such as meeting minutes and other organizational material that records intervention-related events. These may reveal who took the initiative for the project, how the intervention objectives were formulated, and who participated in the decision to design the project. Interviews may be better suited to the exploration of mental models (e.g., it can be revealed whether any
stakeholders had a hidden agenda, such as employees wanting to get rid of an unpopular manager or unpopular colleagues, managers wanting to be seen to be "doing something", or internal consultants creating work for themselves to justify their existence).

Intervention activities. The risk assessment method itself and feedback reports provide information on the "objective" part of risk assessment. This element of PE should include the collection of stakeholders' experiences of the feedback of risk assessment results. Through questionnaires at follow-up, employees can be asked whether they are aware of the results of the risk assessment, whether they have participated in meetings feeding back results, or whether the results have been discussed with managers and colleagues. It may also be examined whether participants felt the risk assessment tool examined the appropriate topics. Appraisals of the risk assessment may be collected by obtaining minutes from feedback meetings, direct observation of feedback activities, and inspection of feedback documents. Such data can help determine how the risk assessment method and content were decided upon, and which strategies were employed to feed results back to employees. Interviews provide an important source of data about the impact of risk assessments: Have the results been used, and how? Independent reviews of documented action plans provide a method of obtaining information about the content and objectives of action plans. It is possible to detect the level of detail: Do action plans include vague objectives such as "We want to be better at communicating", or do they include detailed, concrete descriptions of the objective and associated activities, deadlines, required resources, and allocated responsibilities for key tasks? Comparisons can also be made between the results of risk assessment and how these fed into action plans.
TABLE 1
Process evaluation checklist

The intervention

Question: Who initiated the intervention and for what purpose?
Necessary data ("objective"): Organizational data: Who took the initiative? What was the official objective?
Additional data ("appraisal"): Interviews: Detection of hidden agendas

Question: Did the intervention activities target the problems of the workplace?
- What means of risk assessment was used?
- What was the implementation strategy?
- How many changes were planned?
- Was the focus on one large change, or rather on many small steps to make a change?
- Were action plans sufficiently detailed?
- To what extent were activities tailored to the organization?
- To what extent did the activities target multiple levels?
- How was risk assessment translated into activities?
Necessary data ("objective"): Comparison of risk assessment to action plans; documentation of activities developed; review of workshop material
Additional data ("appraisal"): Interviews: Appropriateness of intervention tools, including workshop tools and questionnaires. Questionnaires: Appropriateness of intervention activities

Question: Did the intervention reach the target group?
- Why were intervention activities not implemented?*
- Which aspects of the activities brought about changes?
- How much was delivered, and to whom?
- Who noticed changes?
- Were positive and negative "side effects" monitored?
- Was a "plan B" developed to address any lack of progress?
Necessary data ("objective"): Questionnaires, meeting minutes: Actual implementation
Additional data ("appraisal"): Interviews: Degree of intervention activity implementation

Question: Who were the drivers of change?
- Did the roles and commitment of key stakeholders change over time?
- Did employees participate in real decision making, and how many were involved?
- Did employees assume responsibility for the project and the completion of activities?
- What was the role of formal representatives?
- What was the role of senior managers? Were the necessary resources allocated? Did they support the project throughout, and how was support manifested?
- What was the role of middle managers? Did they support the project throughout, and how was support manifested? Did they function as the link between senior management and employees? Did they encourage active participation by employees?
- What was the role of consultants? Did they create a supportive, trusting atmosphere? Did they use tools to facilitate the process, e.g., visualizing progress, and how did they work? Did the consultant enable organizational ownership?
Necessary data ("objective"): Questionnaires: The extent of involvement
Additional data ("appraisal"): Interviews: Identification of drivers of change throughout all phases of the project; key stakeholders' roles and involvement

Communication and information

Question: What kind of information was provided to participants during the study?
- Were participants informed about the project?
- Were risk assessment results fed back?
- To what extent are all participants updated about progress?
- Were small successes celebrated?
Necessary data ("objective"): Organizational data: Review of meeting minutes, memos, correspondence, posters, leaflets
Additional data ("appraisal"): Interviews: Appropriateness and quality of information

The context

Question: Which hindering and facilitating factors in the context influenced intervention outcomes?
- Why were intervention activities not implemented?*
- How did the intervention fit with the culture and conditions of the intervention group? (omnibus context)
- What capacity did the organization have to conduct interventions? (omnibus context)
- Which events took place during the intervention phase? (discrete context)
- Did a change in management take place?
- Did organizational restructuring take place during the intervention phase?
- Did changes outside the organization take place during the intervention phase that may have influenced intervention outcomes, e.g., changes in legislation, recession, etc.?
Necessary data ("objective"): Organizational data, news media: What happened in the discrete context?
Additional data ("appraisal"): Interviews: Identification of hindering and facilitating factors in the omnibus and discrete context; degree of intervention activity implementation

Participants' mental models

Question: What is the role of participants' mental models?
- To what extent are participants ready for change?
- To what degree do participants have shared mental models? In case of resistance, what were the threats appraised by key stakeholders? In case of divergence, how did mental models differ?
- How did participants perceive the intervention and its activities?
- Did the intervention bring about a change in participants' mental models?
- Why were intervention activities not implemented?*
Necessary data ("objective"): Questionnaires: Degree of openness to change; quality, quantity, and appropriateness of intervention activities
Additional data ("appraisal"): Interviews and observations: Changes in employees' perceptions of themselves, their colleagues, and their workplace postintervention; degree of intervention activity implementation

*The question "Why were intervention activities not implemented?" can be asked at all three levels, but the answers are different for each level. At the intervention level, the answer will reside in the actions of key stakeholders, whereas at the context and mental models levels, we may obtain information about the reasons behind these behaviours.

Cognitive mapping techniques may be useful in making the connection between problems present in the organization and planned intervention activities (Harris, Daniels, & Briner, 2002): This will help to determine whether there is a clear link between the results of the risk assessment and the activities outlined in action plans. Through observations of action planning meetings and interviews with those involved, it is possible to explore the translation of risk assessment results into action plans. These data collection methods should address questions about how and why activities were prioritized, and by whom. It is also possible to document the degree to which action plans were implemented: This may be obtained through questionnaire data collected from various stakeholders. By listing the planned activities in questionnaires and asking respondents to indicate which they have participated in, or which procedures have been changed, it is possible to document the reach of activities (see Nielsen et al., 2006). Through interviews we may gather data on why activities were not implemented: Questions can be asked about whether the necessary resources were available and, if not, why participants perceived this to be the case (e.g., were senior managers reluctant to allocate resources, or did no one assume responsibility for implementing actions?). The drivers of change may be identified through interviews, meeting minutes, and questionnaires. Data may be collected by asking employees about the extent to which they were involved in developing and implementing change (see Randall, Nielsen, & Tvedt, 2009). A recent criticism of action research approaches is that often only a few employees are involved through steering groups or health circles (Nielsen, Randall, Holten, & Rial-González, 2010). It has been argued that the advantages of participation can be achieved only by involving all employees (Hurrell, 2005): Therefore, information about participation should be collected from as many of the intended beneficiaries of the intervention as possible. While interviews provide rich information about how employees were involved and how such involvement was perceived, they do not reveal how many had the opportunity to influence the intervention process. A questionnaire has now been developed that can be used to examine the degree to which all employees were involved in the intervention process (Randall et al., 2009). Information about senior and middle management support can be obtained through interviews and questionnaires in a similar manner (e.g., Nielsen & Randall, 2009).
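Documenting reach in this way amounts to simple counting and is easy to automate. The sketch below is a minimal illustration with fabricated activity names and responses (none of the data is taken from the studies cited); it computes, for each planned activity, the percentage of respondents who report having participated in it.

```python
# Minimal sketch: computing the "reach" of planned intervention activities
# from follow-up questionnaire responses. Activity names and responses are
# fabricated for illustration; real data would come from the survey itself.

def activity_reach(responses, activities):
    """Percentage of respondents reporting participation in each activity.

    responses: list of sets, one per respondent, naming activities joined.
    activities: list of all planned activities (so zero-uptake ones appear).
    """
    n = len(responses)
    return {
        activity: round(100 * sum(activity in r for r in responses) / n, 1)
        for activity in activities
    }

planned = ["feedback meeting", "workload workshop", "new rota procedure"]
answers = [
    {"feedback meeting", "workload workshop"},
    {"feedback meeting"},
    {"feedback meeting", "new rota procedure"},
    set(),  # this respondent was not reached by any activity
]

print(activity_reach(answers, planned))
# → {'feedback meeting': 75.0, 'workload workshop': 25.0, 'new rota procedure': 25.0}
```

Activities with low reach can then be followed up in interviews to establish why they failed to reach their target group.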
Through observing activities that are designed to enhance participation, it is possible to collect data on, for example, how many employees were invited to attend the activity, how many actually participated, how many participants were active during the meetings, and what roles middle managers, employees, senior managers, and, if applicable, consultants took during the activities. The following questions could be addressed using these data: Did all organization members make suggestions for planning and implementing the intervention and its activities? What was the general climate between the different key stakeholders: Did employees have an actual say, or were decisions made at management level? Did any stakeholders show resistance? How were the meetings run, e.g., were tools used to facilitate discussions?

Communication and information

Organizational material and archive data such as posters, meeting notes, mail communication, leaflets, memos, and correspondence provide "objective" data on how the intervention was communicated. However, this organizational data should be supplemented by interviews and questionnaires to establish whether this information actually reached participants and how it was perceived by recipients. As has already been discussed, the intentions driving the intervention may not match the way these are perceived by its recipients. A question for intervention recipients should therefore be: Was information sufficient, and was it easy to understand and relate to daily work? If middle managers were given leaflets, did they distribute them? Did participants read emails about the project, updates in personnel magazines, or information sent to them? It is also important to document the reach of information about different parts of the intervention process. Are employees informed about the initiation of the project but then hear nothing about the results of the risk assessment, the developed action plans, and the implemented changes? If this is the case, they are likely to develop a perception that the project was unsuccessful. Even if improvements are being implemented, employees may fail to see the link between the intervention project and the improvements if these are not clearly communicated.

Documenting the importance of context

Qualitative methods may be best suited to unpredictable, complex, and difficult settings as a means of mapping critical events that may influence outcomes (Yin, 1994). Qualitative studies may be suitable for examining contextual factors that may affect intervention outcomes (e.g., the appointment of a manager who supports change) and for exploring the full range of behaviours and attitudes that the context might affect, thereby working backwards to make inferences about the situation. Data may be obtained from meeting notes, interviews, and organizational material. Data may also be collected on any concurrent events such as changes in management and/or ongoing restructuring. Through news media, information may be obtained about the national context, e.g., changes in legislation and media attention (e.g., in some countries the teaching profession receives negative media attention on a regular basis). This material, however, should always be analysed in terms of how it relates to the intervention itself. Further information about the impact of such factors may be obtained through interviews, meeting minutes, and other organizational material. At this level, too, the question "Why were activities not implemented?" could be asked, but with additional probe questions about answers that point to the impact of context: It may be that employees with substantive contact with clients found it hard to prioritize participating in activities that took time away from clients, or that concurrent restructuring took up all the time and energy of managers and employees. Such information may be best obtained through interviews.


Evaluating the impact of mental models

Some aspects of mental models, such as readiness for change, can be explored through questionnaires (Weiner et al., 2008). However, changes in multifaceted and complex mental models, and the diversity of mental models, may be more difficult to detect using quantitative methods. In order to understand the impact of an intervention, we must also gain an understanding of how the intervention was perceived by recipients. Through interviews and reviews of meeting minutes, we can obtain information about the extent to which participants felt intervention activities addressed the issues raised in the risk assessment. The question "Why were intervention activities not implemented?" may also be answered at this level. The answer to this question resides in how participants perceived the intervention activities. This involves asking questions such as: Were key stakeholders unsupportive, and if so, what were their mental models of why the intervention presented a threat? Were activities not perceived to be appropriate? How was the occupational health consultant received: Did he or she show an understanding of the culture and problems of the organization? All these questions may help explain the actions of key stakeholders, e.g., a middle manager who initially supported the project may have stopped doing so once s/he realized that it might threaten her/his status within the organization. Interviews may be particularly important for revealing the mental models of participants in terms of how they evaluate the appropriateness and quality of interventions. However, to get a broader but less detailed picture from the wider population, questionnaires may be used (cf. Nielsen et al., 2007, 2009). Focus groups may be a particularly powerful method for revealing the extent to which shared mental models have developed (Mohammed, Ferzandi, & Hamilton, 2010).

Changes in mental models of the job.
To explore double-loop learning, questionnaires can be used to examine the degree to which different procedures have been introduced to ensure that decisions and behaviours are not taken for granted but openly explored (Randall et al., 2009). In addition, observations and interviews may help reveal changes in procedures. For example, Nielsen, Randall, and Christensen (2010) found that leaders and employees changed their behaviours after a teamwork implementation: Leaders became aware of exerting transformational leadership, and employees started taking over responsibilities such as rota planning and independent problem solving. These data can be collected by asking employees whether there have been any changes in behaviours and work procedures during the intervention project and whether these are ascribed to the intervention.

Timings of data collection

It has been argued that process evaluation questions should be included at follow-up (Randall et al., 2009), but it is also important to gather data throughout the process. Ongoing data collection is important for two reasons. First, collecting data on the process only after the intervention project may result in retrospective sense making (Weick et al., 2005): Participants may seek explanations of why the intervention programme did or did not have the desired effect, e.g., that middle managers did not fulfil their role as change agents or that concurrent changes created conflicts. Second, elements of the process may change over time. For example, a project may be initiated for one purpose (e.g., the legal requirement to conduct risk assessments) but may be sustained because participants see the added value of the project. The role of key stakeholders may also change over time. For example, senior managers may be supportive of the project at the beginning, but focus their attention on new projects over time, or lose interest if the project does not bring about changes at the speed expected. Participation may also change over time. One of the core elements of participatory processes is empowerment: Over time, employees develop skills and start to feel responsible for creating a good working environment. It is important to document any such changes, e.g., do they emerge during risk assessment, during action planning, or during the implementation of activities? Concurrent data collection throughout the project is time consuming for researchers and participants alike. This means that measures need to be developed that are both easy to use and easy to analyse. Although observation of meetings and of work, and interviews, are time consuming, reviewing meeting minutes and developing short questionnaires may be less so. The scales by Randall et al. (2009) are fairly lengthy; however, it is also possible to construct short questionnaires and interview schedules (e.g., using telephone interviews) that can be used to collect data from key stakeholders (e.g., the union representative and middle managers) on a monthly basis. Such data collection need not take much more than a few minutes. Another possibility is to develop a short list of questions that managers and employees discuss at regular meetings in the organization. Such a strategy
also ensures that the project stays on the agenda in the organization and that organization members reflect on progress. Collecting short measures may benefit not only the researchers; it may also serve as formative evaluation for the organization, as it provides the opportunity to reveal gaps in the current implementation process and take corrective action (Patton, 2002). Using the Experience Sampling Method offers the opportunity to measure daily experiences (Hektner, Schmidt, & Csikszentmihalyi, 2007) and to examine how experiences vary on a daily basis, e.g., levels of participation and middle managers' actions. In this way PE may provide managers with dashboard data that can be used to inform decisions about changes to the intervention process.
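A minimal sketch of how such ongoing short-measure data could be turned into dashboard-style formative feedback follows. This is an illustrative sketch only: the question names, the 1–5 rating scale, and the alert threshold are all invented for illustration and are not taken from the studies cited above.

```python
from statistics import mean

# Hypothetical monthly ratings (1 = low, 5 = high) from three key
# stakeholders on two process questions; the questions, scale, and
# threshold are invented for illustration.
monthly_ratings = {
    "Jan": {"participation": [4, 4, 5], "manager_support": [4, 5, 4]},
    "Feb": {"participation": [3, 4, 3], "manager_support": [4, 4, 5]},
    "Mar": {"participation": [2, 2, 3], "manager_support": [3, 4, 3]},
}

ALERT_THRESHOLD = 3.0  # flag process elements drifting below this mean

def process_dashboard(ratings, threshold=ALERT_THRESHOLD):
    """Summarize each month's ratings and flag elements that may need
    corrective action (formative evaluation)."""
    report = {}
    for month, items in ratings.items():
        report[month] = {
            question: {"mean": round(mean(values), 2),
                       "alert": mean(values) < threshold}
            for question, values in items.items()
        }
    return report

report = process_dashboard(monthly_ratings)
for month, items in report.items():
    for question, summary in items.items():
        if summary["alert"]:
            print(f"{month}: '{question}' fell to {summary['mean']} - review needed")
```

Run monthly, such a summary keeps the project on the agenda while signalling where implementation may be drifting and corrective action could be taken.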

Analysing process evaluation data

Research including systematic process evaluation is still in its infancy; however, there are a few studies that can be drawn upon for inspiration on how to analyse process evaluation data. It may be considered optimal to integrate process evaluation data into effect evaluation to fully understand how processes have influenced intervention outcomes. To test the effects of actual exposure, Randall et al. (2005) divided intervention participants into two groups: those who had been exposed to the intervention and those who had not. They found clear differences in exhaustion levels, concluding that those whom the intervention had reached reported better health postintervention. More sophisticated analyses of the patterns of interaction between process and outcomes may also be conducted. Using Structural Equation Modelling, Nielsen et al. (2007) combined "objective" process data (information received about the intervention programme, and participation in intervention activities) and "appraisal" process data (quality of intervention activities) and linked these to intervention outcomes. Another study, by Nielsen and Randall (2009), linked preexisting working conditions (as a proxy for organizational maturity) to the role of the middle manager during the intervention and found this role predicted intervention outcomes. There may be instances where process data are not easily integrated into effect analysis, e.g., when describing the impact of context. Aust et al. (2010), Biron et al. (2010), and Nielsen et al. (2006) all present examples of how quantitative effect data and qualitative process data may be combined in such circumstances.
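As a hedged illustration of the exposure-split idea, the sketch below compares post-intervention exhaustion scores between participants who were and were not actually reached by an intervention. The scores and group sizes are invented, and a simple standardized mean difference (Cohen's d) stands in for the fuller analyses reported in the studies cited above.

```python
from statistics import mean, stdev

# Hypothetical post-intervention exhaustion scores (1-5, higher = worse).
# The split mirrors comparing participants actually reached by the
# intervention with those who were not; all numbers are invented.
exposed     = [2.1, 2.4, 1.9, 2.6, 2.2, 2.0]  # reached by the intervention
not_exposed = [3.1, 2.9, 3.4, 3.0, 2.8, 3.3]  # nominally included, not reached

def cohens_d(a, b):
    """Standardized mean difference with a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                 / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd

d = cohens_d(exposed, not_exposed)
print(f"exposed M = {mean(exposed):.2f}, "
      f"not exposed M = {mean(not_exposed):.2f}, d = {d:.2f}")
```

A negative d here would indicate lower exhaustion among those the intervention reached; in practice a significance test and covariate adjustment would accompany such a comparison.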
In a study of middle manager training, Nielsen, Randall, and Christensen (2010) outlined how quantitative and qualitative data were integrated in order to detect effects at four levels (satisfaction with the intervention, changes in values and attitudes, changes in middle managers' own behaviours, and changes in subordinates' working conditions and well-being). A limitation of all of these studies is that they focus on isolated, specific elements of the process and do not contain comprehensive process evaluation strategies like the one described in this article. Some examples of more comprehensive approaches to analysing process data have been reported. In their review, Murta et al. (2007) listed a number of factors that may serve as an analytic framework: recruitment, context, reach (attendance rates), dose delivered (intervention content), dose received (the extent to which participants apply activities), participants' attitudes, fidelity (whether activities were implemented as planned), and the link between process and outcome. These could be analysed in a stepwise manner to detect at which level effects can be identified. Randall et al. (2007) provided a template for analysing process data. They suggested analysing processes and outcomes of interventions at three levels: (1) microprocesses that concern the intervention implementation: magnitude, valence, and effects on the working conditions of participants and others; (2) macroprocesses describing the design, delivery, and maintenance of interventions; and (3) intervention context, analysing the importance of the context at different levels, from the context of the intervention, the department, and the organization to the demands of the sector and the national level.
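The stepwise logic suggested for the Murta et al. (2007) factors can be sketched as follows. The factor ordering follows their list, but the observed proportions and the 0.7 cut-off are hypothetical and serve only to show how a stepwise analysis could locate where implementation broke down.

```python
# Hypothetical stepwise walk through the process factors listed by
# Murta et al. (2007). The ordering follows their list; the observed
# proportions and the 0.7 cut-off are invented for illustration.
STEPS = ["recruitment", "context", "reach", "dose_delivered",
         "dose_received", "attitudes", "fidelity"]

observed = {  # proportion of the implementation plan achieved per factor
    "recruitment": 0.95, "context": 0.90, "reach": 0.85,
    "dose_delivered": 0.80, "dose_received": 0.55,
    "attitudes": 0.75, "fidelity": 0.60,
}

def first_breakdown(observed, cutoff=0.7):
    """Return the first factor (in order) below the cut-off, i.e., the
    level at which implementation appears to have broken down."""
    for step in STEPS:
        if observed.get(step, 0.0) < cutoff:
            return step
    return None  # every level met the cut-off

print(first_breakdown(observed))  # → dose_received
```

Stopping at the first failing level mirrors the stepwise reasoning: if activities were delivered but not applied by participants, later outcome effects can be interpreted in that light.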

CONCLUDING REMARKS

In this article we presented a framework for how the processes of interventions may be evaluated. It does not present a complete model to cover all interventions; rather, it is intended to provide a guiding framework from which elements may be selected for inclusion in evaluations of organizational-level occupational health interventions. There are conflicting demands on evaluation research: It should be cost effective, ecologically valid, and practically important, while at the same time being of scientific importance and generalizable (Bussing & Glaser, 1999). We believe a model that prompts careful consideration of the information that is necessary to evaluate the effectiveness of the intervention, and of the circumstances under which the results may be transferable to other contexts, will help to develop our understanding of organizational-level occupational health interventions. It may be worthwhile changing our opinion of what represents success in organizational-level occupational health interventions. In line with Lipsey (1996), we propose three indicators of success: (1) changes in working conditions, employee health, and well-being; (2) intermediate indicators that focus on the intervention activities implemented, whether
participants were aware of the intervention and its activities, whether participants held positive attitudes towards the intervention, whether participants developed new skills, and whether the intervention changed participants' behaviour; and, finally, (3) indirect indicators that focus on whether the organization changes its strategies and tools for dealing with occupational health issues. PE should be carried out in both intervention and control groups to gain an understanding of changes in both. It is often the case that control groups and intervention groups interact with each other, and this should be considered in PE (Nielsen et al., 2006). Problems during the process may include a lack of perfect blinding and crossover between groups, and it is important to understand any interaction between the two (Hurrell & Murphy, 1996). The focus of this article is on process evaluation, but we do not mean to say that effect evaluation is not important; rather, the two should supplement each other (Nielsen, Taris, & Cox, 2010). Others have discussed what should be assessed in effect evaluation, appropriate follow-up times, and the number of follow-ups necessary (Kompier et al., 1998, 2000; Mikkelsen, 2005; Pettigrew, 1990; Taris & Kompier, 2003); it is beyond the scope of this article to discuss these. Although we discussed which methods may be used to collect data at the three levels, there is still a long way to go in determining which methods are best suited to collecting information at particular levels. One of the problems mentioned in the introduction is the lack of integration between process evaluation and effect evaluation. Few studies have integrated quantitative process evaluation and effect evaluation (e.g., Bond, Flaxman, & Bunce, 2008; Nielsen et al., 2007, 2009), but this may be a viable way forward. Finally, we would like to emphasize that we are not against quasi-experimental designs; rather, we argue that these cannot stand alone in a simple effect evaluation.
Quasi-experimental designs may control for biases that process evaluation cannot control for on its own; we should therefore try to integrate process evaluation measures into the strongest designs possible.

REFERENCES

Argyris, C. (1976). Theories of action that inhibit individual learning. The American Psychologist, 31, 638–654.
Argyris, C. (1995). Action science and organizational learning. Journal of Managerial Psychology, 10, 20–26.
Armenakis, A., & Bedeian, A. (1999). Organizational change: A review of theory and research in the 1990s. Journal of Management, 25, 293–315.
Armstrong, R., Waters, E., Moore, L., Riggs, E., Cuervo, L. G., Lumbiganon, P., et al. (2008). Improving the reporting of public health intervention research: Advancing TREND and CONSORT. Journal of Public Health, 19, 1–7.


Aust, B., Rugulies, R., Finken, A., & Jensen, C. (2010). When workplace interventions lead to negative effects: Learning from failures. Scandinavian Journal of Public Health, 38, 106–119.
Bambra, C., Egan, M., Thomas, S., Petticrew, M., & Whitehead, M. (2007). The psychosocial and health effects of workplace reorganisation. 2. A systematic review of task restructuring interventions. Journal of Epidemiology and Community Health, 61, 1028–1037.
Biron, C., Gatrell, K., & Cooper, C. (2010). Autopsy of a failure: Evaluating process and contextual issues in an organizational-level work stress intervention. International Journal of Stress Management, 17, 135–158.
Bond, F. W., Flaxman, P. E., & Bunce, D. (2008). The influence of psychological flexibility on work redesign: Mediated moderation of a work reorganization intervention. Journal of Applied Psychology, 93, 645–654.
Briner, R., & Reynolds, S. (1999). The costs, benefits, and limitations of organizational level stress interventions. Journal of Organizational Behavior, 20, 647–664.
Bunce, D., & West, M. (1996). Stress management and innovation interventions. Human Relations, 49, 209–232.
Burke, W. W., & Litwin, G. H. (1992). A causal model of organizational performance and change. Journal of Management, 18, 523–545.
Bussing, A., & Glaser, J. (1999). Work stressors in nursing in the course of redesign: Implications for burnout and interactional stress. European Journal of Work and Organizational Psychology, 8, 401–426.
Cole, D. C., Wells, R. P., Frazer, M. B., Kerr, M. S., Neumann, W. P., Laing, A. C., & The Ergonomic Intervention Evaluation Research Group. (2003). Methodological issues in evaluating workplace interventions to reduce work-related musculoskeletal disorders through mechanical exposure reduction. Scandinavian Journal of Work, Environment and Health, 29, 396–405.
Cooper, C., Dewe, P., & O'Driscoll, M. (2001). Organizational interventions. In C. Cooper, P. Dewe, & M. O'Driscoll (Eds.), Organizational stress: A review and critique of theory, research, and applications (pp. 187–232). Thousand Oaks, CA: Sage.
Cummings, T. G., & Worley, C. G. (2009). Organization development and change. Mason, OH: Cengage Learning.
Dahl-Jørgensen, C., & Saksvik, P. Ø. (2005). The impact of two organizational interventions on the health of service sector workers. International Journal of Health Services, 35, 529–549.
Egan, M., Bambra, C., Petticrew, M., & Whitehead, M. (2009). Reviewing evidence on complex social interventions: Appraising implementation in systematic reviews of the health effects of organisational-level workplace interventions. Journal of Epidemiology and Community Health, 63, 4–11.
Egan, M., Bambra, C., Thomas, S., Petticrew, M., Whitehead, M., & Thomson, H. (2007). The psychosocial and health effects of workplace reorganisation. 1. A systematic review of organisational-level interventions that aim to increase employee control. Journal of Epidemiology and Community Health, 61, 945–954.
Eklöf, M., & Hagberg, M. (2006). Are simple feedback interventions involving workplace data associated with better working environment and health? A cluster randomized controlled study among Swedish VDU workers. Applied Ergonomics, 32, 201–210.
Eklöf, M., Hagberg, M., Toomingas, A., & Tornqvist, E. W. (2004). Feedback of workplace data to individual workers, work groups and supervisors as a way to stimulate working environment activity: A cluster randomized controlled study. International Archives of Occupational and Environmental Health, 77, 505–514.
Eklöf, M., Ingelgård, A., & Hagberg, M. (2004). Is participative ergonomics associated with better working environment and health? A study among Swedish white-collar VDU users. International Journal of Industrial Ergonomics, 34, 355–366.

EU-OSHA (European Agency for Safety and Health at Work). (2010). European Survey of Enterprises on New and Emerging Risks, 2010. Retrieved from www.esener.eu
European Network for Workplace Health Promotion. (2007). Luxembourg Declaration on Workplace Health Promotion in the European Union. Retrieved from http://www.enwhp.org/fileadmin/rs-dokumente/dateien/Luxembourg_Declaration.pdf
Fredslund, H., & Strandgaard, J. (2005). Methods for process evaluation of work environment interventions. In J. Houdmont & S. McIntyre (Eds.), Occupational health psychology: Key papers of the European Academy of Occupational Health Psychology (pp. 109–117). Oporto, Portugal: ISMAI Publishers.
Giga, S., Noblet, A. L., Faragher, B., & Cooper, C. L. (2003). The UK perspective: A review of research on organisational stress management interventions. The Australian Psychologist, 38, 156–164.
Goldenhar, L., LaMontagne, A., Katz, T., Heaney, C., & Landsbergis, P. (2001). The intervention research process in occupational safety and health: An overview from the National Occupational Research Agenda Intervention Effectiveness Research Team. Journal of Occupational and Environmental Medicine, 43, 616–622.
Greene, J. C., Benjamin, L., & Goodyear, L. (2007). The merits of mixed methods in evaluation. Evaluation, 7, 25–44.
Guastello, S. (1993). Do we really know how well our occupational accident prevention programs work? Safety Science, 16, 445–463.
Guth, W. D., & Macmillan, I. C. (1986). Strategy implementation versus middle manager self-interest. Strategic Management Journal, 7, 313–327.
Harachi, T., Abbott, R., Catalano, R., Haggerty, K., & Fleming, C. (1999). Opening the black box: Using process evaluation measures to assess implementation and theory building. American Journal of Community Psychology, 27, 711–731.
Harris, C., Daniels, K., & Briner, R. B. (2002). Using cognitive mapping for psychosocial risk assessment. Risk Management: An International Journal, 4, 7–21.
Heaney, C., Israel, B., Schurman, S., Baker, E., House, J., & Hugentobler, M. (1993). Industrial relations, worksite stress reduction, and employee well-being: A participatory action research investigation. Journal of Organizational Behavior, 14, 495–510.
Hektner, J., Schmidt, J. A., & Csikszentmihalyi, M. (2007). Experience sampling method: Measuring the quality of everyday life. Thousand Oaks, CA: Sage.
Hugentobler, M., Israel, B., & Schurman, S. (1992). An action research approach to workplace health: Integrating methods. Health Education Quarterly, 19, 55–76.
Hurrell, J. (2005). Organizational stress intervention. In J. Barling, E. K. Kelloway, & M. R. Frone (Eds.), Handbook of work stress (pp. 623–645). Thousand Oaks, CA: Sage.
Hurrell, J., & Murphy, L. (1996). Occupational stress interventions. American Journal of Industrial Medicine, 29, 338–341.
Jimmieson, N. L., Terry, D. J., & Callan, V. J. (2004). A longitudinal study of employee adaptation to organizational change: The role of change-related information and change-related self-efficacy. Journal of Occupational Health Psychology, 9, 11–27.
Johns, G. (2006). The essential impact of context on organizational behavior. Academy of Management Review, 31, 386–408.
Johnson, R. B., Onwuegbuzie, A. J., & Turner, L. A. (2007). Toward a definition of mixed methods research. Journal of Mixed Methods Research, 1, 112–133.
Kompier, M., Cooper, C., & Geurts, S. (2000). A multiple case study approach to work stress prevention in Europe. European Journal of Work and Organizational Psychology, 9, 371–400.
Kompier, M., Geurts, S., Grundemann, R., Vink, P., & Smulders, P. (1998). Cases in stress prevention: The success of a participative and stepwise approach. Stress Medicine, 14, 155–168.

LaMontagne, A. D., Keegel, T., Louie, A. M., Ostry, A., & Landsbergis, P. A. (2007). A systematic review of the job-stress intervention evaluation literature, 1990–2005. International Journal of Occupational and Environmental Health, 13, 268–280.
Landsbergis, P., & Vivona-Vaughan, E. (1995). Evaluation of an occupational stress intervention in a public agency. Journal of Organizational Behavior, 16, 29–48.
Lindquist, T. L., & Cooper, C. L. (1999). Using life style and coping to reduce job stress and improve health in "at risk" office workers. Stress Medicine, 143–152.
Lindström, K. (1995). Finnish research in organizational development and job redesign. In L. Murphy, J. Hurrell, S. Sauter, & G. Keita (Eds.), Job stress interventions (pp. 283–293). Washington, DC: American Psychological Association.
Lines, R. (2004). Influence of participation in strategy change: Resistance, organizational commitment and change goal achievement. Journal of Change Management, 4, 193–215.
Lipsey, M. (1996). Key issues in intervention research: A program evaluation perspective. American Journal of Industrial Medicine, 29, 298–302.
Lipsey, M., & Cordray, D. (2000). Evaluation methods for social intervention. Annual Review of Psychology, 51, 345–375.
Mathieu, J. E., Heffner, T. S., Goodwin, G. F., Salas, E., & Cannon-Bowers, J. A. (2000). The influence of shared mental models on team process and performance. Journal of Applied Psychology, 85, 273–283.
Mikkelsen, A. (2005). Methodological challenges in the study of organizational interventions in flexible organizations. In A. M. Fuglseth & I. A. Kleppe (Eds.), Anthology for Kjell Grønhaug in celebration of his 70th birthday (pp. 150–178). Bergen, Norway: Fagbokforlaget.
Mikkelsen, A., & Saksvik, P. Ø. (1999). Impact of a participatory organizational intervention on job characteristics and job stress. International Journal of Health Services, 29, 871–893.
Mohammed, S., Ferzandi, L., & Hamilton, K. (2010). Metaphor no more: A 15-year review of the team mental model construct. Journal of Management, 36, 876–910.
Murta, S. G., Sanderson, K., & Oldenburg, B. (2007). Process evaluation in occupational stress management programs: A systematic review. American Journal of Health Promotion, 21, 248–254.
Nastasi, B. K., Hitchcock, J., Sarkar, S., Burkholder, G., Varjas, K., & Jayasena, A. (2007). Mixed methods in intervention research: Theory to adaptation. Journal of Mixed Methods Research, 1, 164–182.
Nielsen, K., Cox, T., & Griffiths, A. (2002). Developing interventions for risk management: The Action Innovation Process. In C. Weikert, E. Torkelson, & J. Pryce (Eds.), Occupational health psychology: Empowerment, participation and health at work (pp. 135–140). Vienna, Austria: Allgemeine Unfallversicherungsanstalt (AUVA).
Nielsen, K., Fredslund, H., Christensen, K. B., & Albertsen, K. (2006). Success or failure? Interpreting and understanding the impact of interventions in four similar worksites. Work and Stress, 20, 272–287.
Nielsen, K., & Randall, R. (2009). Managers' active support when implementing teams: The impact on employee well-being. Applied Psychology: Health and Well-being, 1, 374–390.
Nielsen, K., Randall, R., & Albertsen, K. (2007). Participants' appraisals of process issues and the effects of stress management interventions. Journal of Organizational Behavior, 28, 793–810.
Nielsen, K., Randall, R., Brenner, S.-O., & Albertsen, K. (2009). Developing a framework for the "why" in change outcomes: The importance of employees' appraisal of changes. In P. Ø. Saksvik (Ed.), Prerequisites for healthy organizational change (pp. 76–86). Bussum, The Netherlands: Bentham Publishers.

Nielsen, K., Randall, R., & Christensen, K. B. (2010). Does training managers enhance the effects of implementing teamworking? A longitudinal, mixed methods field study. Human Relations, 63, 1719–1741.
Nielsen, K., Randall, R., Holten, A. L., & Rial González, E. (2010). Conducting organizational-level occupational health interventions: What works? Work and Stress, 24, 234–259.
Nielsen, K., Taris, T. W., & Cox, T. (2010). The future of organizational interventions: Addressing the challenges of today's organizations. Work and Stress, 24, 219–233.
Noblet, A. J., & LaMontagne, A. D. (2009). The challenges of developing, implementing, and evaluating interventions. In S. Cartwright & C. L. Cooper (Eds.), The Oxford handbook of organizational well-being (pp. 466–496). Oxford, UK: Oxford University Press.
Nytrø, K., Saksvik, P. Ø., Mikkelsen, A., Bohle, P., & Quinlan, M. (2000). An appraisal of key factors in the implementation of occupational stress interventions. Work and Stress, 14, 213–225.
Øyum, L., Kvernberg Andersen, T., Pettersen Buvik, M., Knutstad, G. A., & Skarholt, K. (2006). God ledelsespraksis i endringsprocesser: Eksempler på hvordan ledere har gjort til en positiv erfaring for de ansatte [Good management practice during change processes: Examples of how managers have made it a positive experience for employees] (Rep. No. 567). København, Denmark: Nordisk Ministerråd.
Patton, M. Q. (2002). Qualitative research and evaluation methods. Thousand Oaks, CA: Sage.
Pettigrew, A. M. (1990). Longitudinal field research on change: Theory and practice. Organization Science, 1, 267–292.
Randall, R., Cox, T., & Griffiths, A. (2007). Participants' accounts of a stress management intervention. Human Relations, 60, 1181–1209.
Randall, R., Griffiths, A., & Cox, T. (2005). Evaluating organizational stress-management interventions using adapted study designs. European Journal of Work and Organizational Psychology, 14, 23–41.
Randall, R., Nielsen, K., & Tvedt, S. D. (2009). The development of five scales to measure participants' appraisal of organizational-level stress management interventions. Work and Stress, 23, 1–23.
Richardson, K. M., & Rothstein, H. R. (2008). Effects of occupational stress management intervention programs: A meta-analysis. Journal of Occupational Health Psychology, 13, 69–93.
Roen, K., Arai, L., Roberts, H., & Popay, J. (2006). Extending systematic reviews to include evidence on implementation: Methodological work on a review of community-based initiatives to prevent injuries. Social Science and Medicine, 63, 1060–1071.
Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach. London, UK: Sage.
Rousseau, D. M., & Fried, Y. (2001). Location, location, location: Contextualizing organizational research. Journal of Organizational Behavior, 22, 1–13.
Ruotsalainen, J. H., Verbeek, J. H., Salmi, J. A., Jauhiainen, M., Pasternack, I., & Husman, K. (2006). Evidence on the effectiveness of occupational health interventions. American Journal of Industrial Medicine, 49, 865–872.


Rychetnik, L., Frommer, M., & Shiell, A. (2002). Criteria for evaluating evidence on public health. Journal of Epidemiology and Community Health, 56, 119–127.
Saksvik, P. Ø., Nytrø, K., Dahl-Jørgensen, C., & Mikkelsen, A. (2002). A process evaluation of individual and organizational occupational stress and health interventions. Work and Stress, 16, 37–57.
Schurman, S., & Israel, B. (1995). Redesigning work systems to reduce stress: A participatory action research approach to creating change. In L. Murphy, J. Hurrell, S. Sauter, & G. Keita (Eds.), Job stress interventions (pp. 235–263). Washington, DC: American Psychological Association.
Semmer, N. K. (2003). Job stress interventions and organization of work. In L. Tetrick & J. C. Quick (Eds.), Handbook of occupational health psychology (pp. 325–353). Washington, DC: American Psychological Association.
Semmer, N. K. (2006). Job stress interventions and the organization of work. Scandinavian Journal of Work, Environment and Health, 32, 515–527.
Semmer, N. K. (2011). Job stress interventions and organization of work. In J. C. Quick & L. E. Tetrick (Eds.), Handbook of occupational health psychology (2nd ed., pp. 299–318). Washington, DC: American Psychological Association.
Shannon, H., Robson, L., & Guastello, S. (1999). Methodological criteria for evaluating occupational safety intervention research. Safety Science, 31, 161–179.
Shannon, H. S., & Cole, D. C. (2004). Commentary III: Making workplaces healthier: Generating better evidence on work organization intervention research. Sozial- und Präventivmedizin, 49, 92–94.
Steckler, A. B., & Linnan, L. (2002). Process evaluation for public health interventions and research. San Francisco, CA: Jossey-Bass.
Taris, T., Kompier, M., Geurts, S., Schreurs, P., Schaufeli, W., de Boer, E., et al. (2003). Stress management interventions in the Dutch domiciliary care sector: Findings from 81 organisations. International Journal of Stress Management, 10, 297–325.
Taris, T. W., & Kompier, M. (2003). Challenges in longitudinal designs in occupational health psychology. Scandinavian Journal of Work, Environment and Health, 29, 1–4.
Vedung, E. (2006). Process evaluation and implementation theory. In E. Vedung (Ed.), Public policy and program evaluation (pp. 209–245). Piscataway, NJ: Transaction Publishers.
Weick, K. E., Sutcliffe, K. M., & Obstfeld, D. (2005). Organizing and the process of sensemaking. Organization Science, 16, 409–421.
Weiner, B. J., Amick, H., & Lee, S.-Y. D. (2008). Conceptualization and measurement of organizational readiness for change: A review of the literature in health services research and other fields. Medical Care Research and Review, 65, 379–436.
Yin, R. K. (1994). Case study research: Design and methods (2nd ed.). Thousand Oaks, CA: Sage.

Original manuscript received February 2011
Revised manuscript received February 2012
First published online July 2012