
FROM REGULATION TO IMPLEMENTATION: RESPONSIVE ASSESSMENT OF ENVIRONMENTAL COMPLIANCE AND ENFORCEMENT

Orr Karassin† and Oren Perez‡

Responsive regulation has the potential to bridge the gap between environmental policy and implementation. Key to realizing this promise is improved information and reflexive analysis of regulatory performance. Environmental compliance and enforcement indicators, together with an index that serves as a regulatory scoreboard, can set the boundaries of a rational reflexive process and provide necessary information to regulators and the public. Although designing, collecting, and analyzing the necessary data is a complex task, it can be managed rationally through a Goal Oriented Model. This model links indicator and index design to precisely stated goals and objectives of the regulatory agency or program.

Introduction

Improving compliance with environmental regulation and enhancing results has been a longstanding theme in the regulatory debate. The battle has been waged primarily within the arena of environmental agencies, which engage in daily struggles to achieve substantial environmental outcomes and reduce non-compliance with the law. While the "instrument choice" literature has grown steadily more sophisticated, the actual use of regulatory and enforcement mechanisms by regulators seems only modestly affected by theoretical insights. The recognition of the gap between theory and practice has re-emphasized the need for a different kind of regulation, one which may bridge the gap between the sophisticated theoretical base regarding 'proper' regulatory intervention and the fallibilities of 'real' regulatory structures. Responsive regulation has been advocated as the proper solution to this dilemma.



† Orr Karassin is a lecturer and the head of the Environmental Law and Policy Program at Sapir College Law School, Israel. ‡ Oren Perez is an Associate Professor at Bar Ilan University Law School, Israel. * The authors would like to express their gratitude for the support of the Environmental Policy Center at the Jerusalem Institute for Israel Studies. This article has benefited from comments by Dr. Michael Mason and the participants of the Geography and Environment Departmental Seminar at LSE, as well as comments by Prof. Julia Black from the LSE Department of Law.

© Accepted for publication with the Journal of Environmental Assessment Policy and Management

In essence, theories of responsive regulation have in common the general assertion that regulators need to be acutely aware of the consequences of their actions and need to modify their actions accordingly (Farber 1993-1994; Baldwin & Black 2008). This awareness is rooted in the simple notion of the need to look backwards before moving forward. In other words, what is required of regulators is a reflexive investigation of past achievements, aimed at modifying future action. How this investigation should be conducted has at times remained vague. A recurring theme has been improving the information available to regulators on the performance of regulatory mechanisms. However, neither the information needed nor the process through which it should be collected, organized, and analyzed has been clearly defined and articulated.

This article addresses the difficulties associated with selecting and articulating such data. To this end, it proposes a framework of indicators and an index that provide the data required to assess the performance of environmental compliance and enforcement (ECE) programs. The information provided by the indicators and the index supports a rational reflexive process in which regulators and the public can conduct a dialogue on regulatory implementation. The indicators and index are designed through a "Goal Oriented Model" (GO Model). The GO Model aspires to be both generic and modular: it is conceptualized in a manner that is widely applicable yet flexible and open to change. This approach is taken to avoid homogeneity, or a 'one-size-fits-all' approach to performance measurement. At the same time, the GO Model seeks to provide a common theoretical and pragmatic basis that would allow the application of a core set of environmental compliance and enforcement indicators in different enforcement contexts. Application of the GO Model index has the potential to improve responsiveness in environmental regulation and narrow the gap between policy and implementation.

Theories of Responsive Regulation

The Meanings of Responsiveness

Responsiveness can be generally framed as a theory of regulatory change. More precisely, it is a prescription for the way in which regulation should change in order to adapt to the complex forces (both human and natural) affecting compliance and the success of enforcement efforts (Thornton et al. 2008; Thornton et al. 2005; Rechtschaffen 1998). Early notions of responsiveness grew as a reaction to what has been perceived as the failure of the command-and-control approach to accomplish its ambitious agenda of social engineering through purposeful regulatory intervention (Teubner 1983). The influential account of responsive regulation by Ayres and Braithwaite was based on a more modest vision, claiming that "there are no best regulatory solutions", just solutions that respond better than others to the configurations of social situations at any place at a certain point in time. Regulators need to respond by being attuned to several characteristics of the regulated society. First, regulators need to be attuned to the different motivations and objectives of regulated actors. Second, they need to be attuned to "industry conduct" (Ayres and Braithwaite 1992: 5). Both motivations and conduct should affect the form of government intervention, escalating the response of government intervention according to an "enforcement pyramid" (Ayres and Braithwaite 1992: 35).

Baldwin and Black have detailed a comprehensive account of what they refer to as 'really responsive regulation' (Baldwin & Black 2008). A central facet of this account is responsiveness to the logics of different regulatory strategies and tools (Baldwin & Black 2008: 70). Another central feature of 'really responsive regulation' is that a regulatory regime must be "responsive to its own performance." This requirement is twofold. In the context of enforcement, such performance sensitivity requires that the regulator is capable of measuring whether the enforcement tools and strategies in current use are proving successful in achieving desired objectives. This requires one to evaluate whether the current rules and regulations are adapted to meet the regulatory body's objectives. A second step must be modification of the tools and strategies employed by the regulator on the basis of the performance analysis. 'Really responsive regulation' should place assessment at the core of regulatory activity; in doing so, it may facilitate its execution by creating ongoing systems and processes that produce and analyze relevant data (Baldwin & Black 2008: 73).

One interpretation of responsiveness intimately linked to environmental regulation has been offered by Farber. In his description, rules governing the environment are depicted as constantly outdated. In agreement, he notes Orts' assertion that "statutes enacted at a particular time in history are limited by the available knowledge of the time, especially scientific and technological knowledge. Because command-and-control statutes cannot 'learn' easily from changing circumstances and developing knowledge, they often fall short of achieving their objectives in a rapidly changing world" (Orts 1994-1995: 1338). This regulatory obsolescence occurs in environmental regulation in particular because of its reliance on environmental science and technology, which progress through rapid changes (Farber 1993-1994). The solution to such regulatory stagnation is to make environmental regulation more adaptive to its social context by learning from experimentation. Regulatory learning is necessary, given the inadequacy, at any given time, of available information and of current predictions about the effects of policy initiatives. Consequently, environmental regulation should be provisional, subject to reevaluation as new information appears and old solutions are tested against experience (Farber 1993-1994: 798).

Means to an End: Information and Performance Assessment

Naturally, different accounts of responsive regulation offer distinct mechanisms through which responsiveness may be facilitated. Even so, one central vehicle emerges that serves as a platform for the others: the development of informational mechanisms. Better information is a pivotal element of responsive regulation. Without information about what the current regulatory scheme is achieving, regulatory changes will be made in a chaotic or arbitrary fashion, based on anecdotal impressions. A proper informational platform is therefore a crucial condition for reflexive regulation.

The capacity of regulators to establish information-based responsiveness has been hampered, first, by cognitive barriers: the formidable costs of extracting, sorting, processing, and analyzing the vast amount of data associated with environmental regulation (Himma 2007). Second, even where regulators have been able to acquire the information needed to improve the regulatory structure or its operation, such change may not take place because of political or economic considerations. Regulators' decisions may be driven by political and economic considerations which are not aligned with public goals, but rather reflect the self-regarding motivations of the public officials and the politicians who supervise them (Oates 2001; Keohane et al. 1998; Martimort 1999). The gap between common-good considerations and the actual motivations of bureaucrats and politicians may also subvert the information collection and analysis process.

Some of the accounts of responsive regulation detailed above refer more expressly to performance measurement than others. Farber holds that central to the notion of 'dynamic' responsiveness is the monitoring of the state of the environment as well as of regulatory activities. Without improved monitoring efforts and evaluation of the impacts of regulatory programs, "(i)t makes little sense to continue pouring money into programs" with so little knowledge of their effectiveness (Farber 1993-1994: 802). According to Ayres and Braithwaite's description, responsiveness is largely a function of the ability of the enforcement agency to correlate its response to regulatory breach with the cooperative or non-cooperative behavioral attitude of the regulatee. If the enforcement agency is to achieve this, it will have to use a targeted analytic approach to assess regulatees' motives and conduct. This will require producing and assessing information about current and past reasons for non-compliance. Baldwin and Black consider performance assessment, and the modification of regulatory strategies and tools accordingly, as key elements of 'really responsive regulation' (Baldwin & Black 2008: 73).


Even though the different versions of responsive regulation have emphasized (to different degrees) the importance of performance assessment as a basis for regulatory adaptation, none has proposed a workable and applicable framework for extracting, sorting, processing, and analyzing information in a manner that promotes responsiveness in environmental regulation. Because so many of the core aspects of regulatory responsiveness depend on improved reflexivity, or the ability to judge the performance of past regulatory practices, it is surprising how little scholarly attention has been directed towards this end. We argue that the use of environmental compliance and enforcement indicators may generate the needed information in a way that supports responsiveness and reflexivity in the regulatory and enforcement process. Working with indicators may promote reflexivity by forcing the regulator (or the community) to think carefully about its objectives, to seek ways to measure them, to evaluate implementation efforts against these measures, and to continuously assess goals in view of data on past implementation. This reflexive process can be either expert-based or participatory.

Assessing Regulatory Performance through Environmental Compliance and Enforcement Indicators

Environmental Compliance and Enforcement Indicators

Environmental compliance and enforcement indicators ("ECEI") are quantifiable, individualized measurements describing different aspects of the performance of an environmental compliance and enforcement regime. ECEI can be highly diverse, as they need to respond to multiple dimensions, such as time, ecological issues, regulatory requirements, geographic area, and the social configurations of the regulatory environment. ECEI can be used for long-term surveillance, since they can be replicated and applied on an annual basis, thus capturing longitudinal changes. They can focus on a wide spectrum of ecological issues, allowing the regulatory framework to develop sensitivity to the complexity of the social and ecological forces affecting regulatory outcomes.

ECEI can promote essential regulatory learning in three distinct forms: portrayal, evaluation, and transparency. Each of these functions is important in itself, but it is their combination that makes ECEI especially appealing. Portrayal essentially means that ECEI work to describe the performance of the environmental compliance and enforcement system through quantification (and at times through qualitative description). ECEI are intended to generate a simplified 'reality' picture through clear and concise design (Stahl 2005). This requires the adoption of uniform and persistent methods of data-gathering and analysis, which stand at the core of portrayal. ECEI provide an ordered portrait of environmental enforcement performance and concisely convey information to policymakers and the public.
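To make the idea of portrayal through uniform, replicable measurement concrete, the following is a minimal sketch, not drawn from the article, of how a single ECEI observation might be recorded so that it can be replicated annually and compared across dimensions such as time, issue, and geographic area. All class and field names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ECEIObservation:
    """One annual observation of a single compliance/enforcement indicator.

    Field names are illustrative; an agency would adapt them to its own
    reporting dimensions (time, ecological issue, regulation, area, etc.).
    """
    indicator: str          # e.g. "inspections conducted"
    ecological_issue: str   # e.g. "industrial wastewater"
    area: str               # geographic unit covered
    year: int
    unit: str               # e.g. "count", "tonnes", "% of permits"
    value: float

def year_over_year_change(series: list[ECEIObservation]) -> dict[int, float]:
    """Longitudinal view: percentage change of the indicator between years."""
    by_year = {obs.year: obs.value for obs in series}
    years = sorted(by_year)
    return {
        y: 100.0 * (by_year[y] - by_year[prev]) / by_year[prev]
        for prev, y in zip(years, years[1:])
        if by_year[prev] != 0
    }

# Example: three annual observations of the same indicator.
series = [
    ECEIObservation("inspections conducted", "industrial wastewater",
                    "District A", y, "count", v)
    for y, v in [(2008, 120), (2009, 150), (2010, 135)]
]
print(year_over_year_change(series))  # {2009: 25.0, 2010: -10.0}
```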

ECEI facilitate the evaluation of enforcement efforts by providing clear yardsticks along various dimensions, thus pointing to potential changes in the current regulatory structure. Such changes can focus both on improved application of enforcement and compliance tools and on changes to the regulatory framework itself. Indicators showing the types of tools used, the magnitude of their use, and the circumstances in which they are employed, juxtaposed against behavioral outcomes and environmental results, can in turn change the way in which enforcement tools and sanctions are used. Indicators provide a necessary platform for thinking about the relationships between enforcement policy, behavioral outcomes, and environmental results. ECEI can therefore aid the decision whether to employ criminal or administrative sanctions, or when to use random rather than targeted inspections. ECEI can also aid in judging the efficiency of current funding and may suggest required shifts in resource allocation from one enforcement method to another (e.g., allocating additional funding to a compliance promotion program at the expense of the criminal prosecution program) (Stahl 2005). ECEI may further help in assessing whether the current institutional structure of the enforcement program is suited to realizing program goals and whether the agency has the necessary powers and tools to address those goals (McElfish & Varnell 2006).

ECEI support a reflexive consideration of the structure of the regulatory scheme itself by shedding light on problems that need to be addressed. They may induce change by presenting information about the failure of past policies. In portraying the performance of the enforcement system in a certain way, they frame policy decisions and set the agenda for change (Kingdon 1995: 95). Admittedly, indicators do not of themselves guarantee rational planning and decision-making; the information they gather is, however, a prerequisite for such a process. Assessments drawn from indicators about the environmental and human consequences of enforcing regulation set the stage for applying the reflexive changes needed in regulation (Fiorino 1999; Orts 1994-1995). These changes may include, first, the alteration of the sanction (or the level of the eco-tax) authorized by regulation, if the indicators show that it fails to produce the required results. Second, regulatory revision may include the addition of enforcement tools and authorities when the tools in use cannot improve effectiveness; for example, when the regulator or the courts do not possess the power to impose administrative or civil fines, or when there is no arrangement for ordering a supplemental environmental project, damage caused by the offence cannot be remediated. The third type of change is the most encompassing and occurs when the entire structure of the regulatory scheme must be changed: for example, if technological standards are found to inhibit technological innovation, the entire structure of command-and-control regulation may need to be changed (e.g., employing emission standards instead). Another example would be when the probability of detection by the regulator is too low to produce deterrence.


Finally, ECEI work to encourage commitment and transparency with regard to enforcement actions. Indicators increase the transparency of enforcement means and achievements towards the general public (OECD 2003). By extending the amount of data available for public scrutiny, ECEI contribute to participatory democracy and facilitate a pluralistic environmental debate in which "a wider set of views, analysis, values and policy options are considered" (Esty 2004). Making enforcement data open to public scrutiny may also facilitate private enforcement (either through the courts or through consumption and investment decisions) (Levy 2002). These actions may have a substantial effect on the efficacy of the regulatory regime and the behavior of polluting firms (Kagan et al. 2003). They may also encourage a sense of public ownership over the enforcement program and induce community support (Stahl 2005).

ECEI can also facilitate transparent reporting to international organizations. The OECD has made several recommendations encouraging states to increase the use of environmental indicators, with the goals of reporting, monitoring progress, and evaluating strategies for improving environmental performance (OECD 1979; OECD 1991; OECD 1996; OECD 1998). By adding certain dimensions to existing environmental indicators, ECEI can be used in comparing the environmental policy and performance of states. This would require further research, of which this article aims to be a part.

Distinguishing ECEI From Environmental Indicators

What is the difference between ECEI and the wide range of environmental and sustainability indicators ("ESI") currently used by national and international environmental agencies? Whereas ESI focus on diagnosing the 'pressures' and the 'state' of the environment, ECEI concentrate on the enforcement and compliance process by delineating inputs, outputs, outcomes, and environmental results. This distinction highlights a major shortcoming of ESI: in their current form, ESI cannot provide a measure of the success and efficacy of enforcement efforts (U.K E.A 2005A). The design of ECEI, by contrast, is driven by enforcement and compliance theory and reflects enforcement goals, behavioral strategies, and the aims of enforcement tools.

It is clear that in designing enforcement indicators it is wrong to rely simply on the state of the environment as a single proxy. A more complex and theory-driven approach is needed for several reasons. First, the causal link between enforcement and compliance efforts and the state of the environment is not direct. Many variables may influence the environmental behavior of firms and individuals, and the state of the environment, irrespective of enforcement actions. For example, a reduction in CO2 emissions may be a product of economic recession rather than regulatory intervention; unanticipated technological innovation may change the industry's average abatement costs, leading to environmental improvement; and consumers' preferences regarding environmental values may change, supporting or detracting from environmental efforts. These causal factors may affect the 'state' of the environment and the 'pressures' it faces in ways unlinked to the direct effects of regulatory intervention.

Second, substantial time may elapse between the application of enforcement measures and changes becoming evident in the environment. Most environmental consequences of enforcement will not occur directly after enforcement programs and measures have been put into place. Rather, environmental changes are more gradual and may require some time to evolve. This requires regulators to take a long-term view of regulatory objectives and to develop institutional patience (and the capacity to withstand pressures to cling to short-term objectives). In contrast, behavioral changes may occur more rapidly, and should therefore be monitored alongside environmental changes as an additional measure of the efficacy of regulatory intervention.
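To illustrate the monitoring implication of this time lag, the following is a rough sketch, using entirely synthetic numbers of our own invention, of how an analyst might compare an enforcement-output series against a fast-responding behavioral outcome and a slower environmental result at different lags. The series names and values are assumptions for demonstration only.

```python
# Synthetic illustration only: comparing how quickly a behavioral outcome and an
# environmental result track an enforcement output, at different time lags.
# (statistics.correlation requires Python 3.10+.)
from statistics import correlation

inspections     = [50, 60, 80, 90, 100, 100, 95, 90]                # outputs per year
compliance_rate = [0.60, 0.64, 0.70, 0.76, 0.80, 0.81, 0.80, 0.79]  # behavioral outcome
emissions       = [210, 208, 205, 196, 185, 172, 165, 162]          # environmental result (kt)

def lagged_corr(x, y, lag):
    """Correlation of x[t] with y[t + lag], i.e. y responding `lag` years later."""
    return correlation(x, y) if lag == 0 else correlation(x[:-lag], y[lag:])

for lag in range(4):
    print(f"lag={lag}",
          f"behavior={lagged_corr(inspections, compliance_rate, lag):.2f}",
          f"environment={lagged_corr(inspections, [-e for e in emissions], lag):.2f}")
# Comparing the two columns across lags helps an analyst judge how long to wait
# before reading environmental results as evidence about enforcement performance.
```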

Efforts to Advance ECEI

Although the use of performance indicators in the public sector has been widespread since the 1980s (Carter 1991; Hood 2007), the story of ECEI is relatively recent. Despite the many potential benefits of using ECEI and their acknowledged importance in the drive to move from pure policy concerns to improved implementation (Sparrow 2003: 192, 272-273), only modest progress has been made in articulating methods for actual ECEI design. These efforts have been spearheaded by a handful of environmental agencies and international organizations. This section and Section D will delineate the core progress made and will illustrate what is lacking in one of the most comprehensive analytical systems developed thus far.

One of the first international references identifying the need to develop better methods to analyze and assess environmental compliance and enforcement was made in Agenda 21. Article 8.21 of Agenda 21 calls for the establishment of integrated strategies to maximize compliance with environmental laws and regulations through improved "institutional capacity for collecting compliance data, regularly reviewing compliance, detecting violations, establishing enforcement priorities, undertaking effective enforcement, and conducting periodic evaluations of the effectiveness of compliance and enforcement programs". One of the first institutional responses to this call was a collaborative project initiated by the OECD and the International Network for Environmental Compliance and Enforcement ("INECE"). The project sought to examine the experience of a few leading countries that had already begun experimenting with ECEI, focusing primarily on work done by the U.S. Environmental Protection Agency ("EPA") (INECE 2008: iv) and by Environment Canada. More recent contributions have been made by the U.K. Environment Agency (U.K E.A 2005A).

The INECE Framework for Developing ECEI

Because the work done by INECE on ECEI has been so central and influential, it is worthwhile to consider it briefly. Below we examine the key contribution of INECE and offer some critical comments, which form the basis for our proposed theoretical framework. The contribution of INECE to ECEI is largely presented in the latest draft version of the "Performance Measurement Guidance for Compliance and Enforcement Practitioners" ("the Guidance") (INECE 2008). In the Guidance, a three-tier model is suggested for developing and implementing ECEI. The first tier in the model is described as "identifying potential indicators", the second as "developing indicators through designing and testing", and the third as "using the indicators to improve program performance and enhance accountability to stakeholders".

The process of identifying potential indicators in tier 1 is based on the 'Logic Model'. The logic model, as used by INECE, classifies four distinct types of indicators that are bound by "logically linked stages of the program". 'Inputs' come first, depicting resources invested (e.g., program funding, personnel, information and monitoring technology). These are followed by 'outputs', measuring activities undertaken (e.g., assistance provided, inspections conducted, enforcement actions taken, and fines assessed and collected). The results of these activities are measured by two types of indicators. First, indicators called 'intermediary outcomes' primarily measure behavioral changes (e.g., greater understanding of how to comply, improved environmental management practices, increased deterrence, normative change). Second, indicators focusing on 'final outcomes' seek to capture environmental impacts (e.g., reduced emissions, improved ambient environmental standards) (INECE 2008: 14). For simplicity we will refer to 'intermediary outcomes' simply as 'outcomes' and to 'final outcomes' as 'results' (this taxonomy is illustrated in the brief sketch below).

There are two key difficulties with INECE's approach to indicator development. First, using the logic model does not actually assist in identifying potential indicators. The logic model provides a useful taxonomy, which may help in categorization, but not in the initial process of developing actual, measurable indicators. Second, the implicit assertion underlying the logic model, namely that inputs, outputs, outcomes, and results constitute a linear causal chain, should be viewed with caution. In effect, because of the highly complex nature of any regulatory intervention, and the plethora of possible intervening variables, the causal relationship between the different elements identified by the logic model is likely to be highly vague and contestable (Perez 2008). The causal linkage between inputs and final outcomes (environmental results) is very difficult to ascertain. It defies simple, a priori projections and requires contextual, in-depth study. Thus, regardless of whether the project fails or succeeds in achieving its objectives (e.g., reduction of CO2 emissions), the question of the contribution of the program and of the various inputs and interim outputs leading to the outcome requires non-trivial analysis (McLaughlin & Jordan 1999).
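As a concrete illustration of the logic-model taxonomy described above, the short sketch below classifies a handful of hypothetical indicators into the four stages, using the shorthand adopted in the text ('outcomes' for intermediary outcomes, 'results' for final outcomes). The indicator names are our own illustrative assumptions, not taken from the Guidance, and the sketch only categorizes candidates; as noted above, it does not by itself identify new indicators.

```python
from enum import Enum

class LogicStage(Enum):
    """The four indicator types of the INECE logic model, as described above.
    'OUTCOME' stands for intermediary outcomes, 'RESULT' for final outcomes."""
    INPUT = "resources invested"
    OUTPUT = "activities undertaken"
    OUTCOME = "behavioral change"
    RESULT = "environmental impact"

# Illustrative classification of a few candidate indicators (names are ours,
# not drawn from the Guidance itself).
candidate_indicators = {
    "annual enforcement budget": LogicStage.INPUT,
    "inspections conducted per year": LogicStage.OUTPUT,
    "fines assessed and collected": LogicStage.OUTPUT,
    "share of facilities with an environmental management system": LogicStage.OUTCOME,
    "reduction in permitted-pollutant emissions": LogicStage.RESULT,
}

# Group by stage to obtain the taxonomy; the stages should not be read as a
# simple linear causal chain (see the caveat in the text).
by_stage = {}
for name, stage in candidate_indicators.items():
    by_stage.setdefault(stage, []).append(name)
for stage, names in by_stage.items():
    print(stage.name, "->", names)
```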


INECE has also developed a set of general principles for the selection of indicators. The principles include, among others, 'diversity' and 'reflection of management priorities' (INECE 2008: 28). The model disregards possible contradictions between the proposed selection principles, however, and neglects to offer a meta-principle that can guide the selection process in cases of irreconcilability. This hampers the potential organizing value that the various principles can offer to the indicator selection process. We suggest instead that establishing effectiveness as a guiding principle allows diversity to be retained while establishing coherence in the selection process. Effectiveness, judged by the degree to which goals are met, can accommodate a diversity of goals. Effectiveness is a natural organizing principle because, in essence, all programs ultimately strive to achieve effectiveness (with a variety of goals in mind). Effectiveness also coincides with notions of responsiveness and reflexivity, which require the careful articulation of goals and objectives by the regulator, in addition to monitoring their attainment.

In similar fashion, the next step proposed by INECE, formulating "criteria to guide the process of indicators selection", also remains somewhat obscure. The Guidance mentions relevance, credibility, transparency, functionality, feasibility, and comprehensiveness as possible criteria (INECE 2008: 15). Even so, no clear distinction is made between the guiding principles and the criteria. Nor is any reference made to the integration of these criteria and their relative importance in the overall rating of the selected indicators. We suggest that the criteria be used to rank indicators from an initial set that has "passed" the guiding-principle test. Criteria guided by implementation concerns then serve as a fine sieve, filtering through only the most applicable and central indicators (a simple sketch of this two-stage selection follows below).

One last important feature of the INECE model is the analysis and revision of indicators within the process of implementation. Some form of assessment is included in both the second and third tiers of the indicator design (INECE 2008: 19, 22). But the INECE guidelines do not clarify when indicator reassessment should occur, what exactly should be reassessed, and how. We suggest that reassessment be instigated by one of three triggers: first, changes in program goals or regulation; second, developments in scientific information related to indicators already applied or influencing the regulatory field; and third, lessons learned from indicators already applied. Changes in program goals or objectives may be driven by changes in regulation, administrative considerations of program managers, or political considerations of their supervisors. Regardless, as program goals determine what constitutes effectiveness, any change in program goals or objectives (in substance or in degree of importance) must be reflected in the indicators. Developments in scientific information may prompt the need to change indicators and to shift how and what is being monitored. For example, new information might shed light on substances or industrial processes that cause poor health; this may require ECEI attention to shift to industry responses with regard to the highlighted substances or processes. Finally, implementing indicators teaches administrators about problems encountered in data gathering and methodologies; these should be assessed and the indicators changed accordingly.
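The following is a minimal sketch of the two-stage selection suggested above: a coarse pass/fail test against the guiding principles, followed by ranking the survivors on implementation-oriented criteria. The indicator names, the chosen criteria, and all scores are illustrative assumptions rather than values proposed by INECE or by this article.

```python
# Two-stage selection sketch: coarse guiding-principle sieve, then ranking by
# implementation-oriented criteria. All names and scores are illustrative.

candidates = {
    # indicator: (passes guiding-principle test?, {criterion: score on a 0-5 scale})
    "inspections per regulated facility": (True,  {"relevance": 5, "feasibility": 4, "credibility": 4}),
    "self-reported compliance rate":      (True,  {"relevance": 4, "feasibility": 5, "credibility": 2}),
    "public inquiries received":          (False, {"relevance": 1, "feasibility": 5, "credibility": 3}),
}

def select_and_rank(candidates):
    # Stage 1: drop indicators that fail the guiding-principle test.
    survivors = {name: scores for name, (passed, scores) in candidates.items() if passed}
    # Stage 2: rank the survivors by their total criteria score (highest first).
    return sorted(survivors.items(), key=lambda item: sum(item[1].values()), reverse=True)

for name, scores in select_and_rank(candidates):
    print(f"{name}: total criteria score = {sum(scores.values())}")
```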

Paving a New Path for ECEI: the Goal Oriented Model (GO Model)

Our critique of the INECE model paves the way for our alternative approach to the design and implementation of ECEI. We have termed our approach the Goal Oriented Model ("GO Model"). This section describes the GO Model from both theoretical and practical perspectives and outlines an index and a tentative list of indicators devised in accordance with the model's guidelines.

The first feature of the GO Model is that it purposefully links indicator design to enforcement and compliance theory and to the goals of the regulatory project (in contrast to the INECE indicators, which are mainly data-driven). As outlined in Figure 1, the identification and design of indicators depend on the goals and objectives of the enforcement and compliance program. The first step towards the design of ECEI therefore involves a clear conceptualization of regulatory goals and of the ways in which they can be measured. This is by no means a trivial matter: regulators frequently operate in an institutional environment of poorly defined, vague, or competing policy objectives. The determination of the goals and objectives of the compliance and enforcement policy is addressed in three stages. The first stage relates to the guiding principles of environmental policy elaborated in section A; the second stage comprises the enforcement strategy objectives detailed in section B; and the third stage involves reference to the objectives of enforcement and compliance tools, constrained by their inner logic, as discussed in section C (see Figure 2). The layers are intertwined, as the broad goals of environmental policy pursued through enforcement and compliance must shape the strategies used by enforcers. In similar fashion, enforcement tools should be used to attain enforcement strategy objectives, and they should be employed in a way that respects the tools' inner logic and the conditions for their effective use.

The GO Model uses multiple criteria decision analysis ("MCDA") to rate potential indicators. Based on normative decision theory, MCDA assumes decisions are made rationally and uses tools that aid coherent decision-making in the face of complexity. MCDA facilitates the identification and articulation of criteria for indicator selection. Each indicator is scored for performance under each criterion. The criteria are then assigned weights in proportion to their importance to the overall decision. Scores and weights are then integrated to derive an overall value for each alternative. Finally, the indicator selection is examined and the results are checked through sensitivity analysis (a simple numerical sketch of this procedure appears at the end of this section).

MCDA is used at two stages in the GO Model. First, indicators are selected and ranked according to their ability to articulate the goals of environmental policy, enforcement strategy objectives, and the logic of enforcement and compliance tools. This process of identifying indicators may still result in a very wide range of indicators, too wide to be implemented given practical constraints. The GO Model therefore proposes a second stage of reducing the initial set and producing a limited index. This stage is further described in section 4. MCDA does not disregard subjective opinion, but it ensures that the decision process is open and transparent and that it provides an audit trail. Scores and weights are explicit and developed according to established techniques. MCDA can aid communication between decision-makers and stakeholders, facilitating the inclusion of experts in the indicator selection process (Mendoza & Prabhu 2000). An MCDA process does not necessarily imply the participation of decision-makers alone. It may easily accommodate the participation of various interest groups, such as environmental NGOs, grassroots organizations, experts, industry groups, and others, if the process initiator allows it. Admittedly, many enforcement agencies may not be open to public participation in a process which determines their goals and objectives, but broad participation has its advantages, as it is likely to add credibility and public acceptance to the process.

To support continual learning and avoid ossification, the GO Model incorporates continual reassessment of the index, considering both its substantive aspects (such as goals and targets) and its design aspects (such as the methodologies used). This is done on the basis of a continually evolving knowledge base (see Figure 1). Not only should the index itself be regularly reviewed, but its findings can provide the grounds for re-examining the structure and substance of the regulatory regime. First, it may provide the basis for inferring required changes to program administration: in the use of enforcement tools, in funding, and in administrative mechanisms and organizational structures. Second, the assessment of the index and of individual indicators lays the necessary foundation for structuring changes to ECE policy, goals, regulatory authorities, structure, and type (see Figure 1). In essence, the index and the individual indicators can be used as an experimental laboratory to explore the goals and the effectiveness of the strategies and tools in use, promoting continuous regulatory learning. These outcomes can be realized by interpreting indicator results in light of enforcement and compliance theory and doctrine.

The GO Model is purposefully sector- and theme-specific, as it cannot be generalized to cover the entire environmental enforcement regime. This is because most indicators reference specific regulation. Consistent with Ayres and Braithwaite's emphasis on the social context of the regulatory project, the GO Model seeks to respond to potential variations in the environmental, organizational, and social conditions of the regulatory regime. As indicated in Figure 1, one of the first steps of the GO Model is to identify the enforcement and compliance tools in use, as well as potential tools which have not been utilized thus far. This is a preliminary step towards checking the effectiveness of ECE tools in their specific contexts.
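To make the MCDA step concrete, the following is a minimal weighted-sum sketch of the scoring, weighting, aggregation, and sensitivity check described above. The criteria, weights, candidate indicators, and scores are illustrative assumptions; the article does not prescribe a particular aggregation rule or weighting technique, and an actual application would elicit these from the regulator or from participating stakeholders.

```python
# Weighted-sum MCDA sketch: score indicators under each criterion, weight the
# criteria, integrate into an overall value, and run a crude sensitivity check
# on the resulting ranking. All criteria, weights, and scores are illustrative.
import random

criteria_weights = {"relevance": 0.4, "feasibility": 0.3, "credibility": 0.3}

indicator_scores = {
    # criterion scores on a 0-5 scale for each candidate indicator
    "inspections per regulated facility": {"relevance": 5, "feasibility": 4, "credibility": 4},
    "self-reported compliance rate":      {"relevance": 4, "feasibility": 5, "credibility": 2},
    "ambient pollutant concentrations":   {"relevance": 5, "feasibility": 2, "credibility": 5},
}

def overall_value(scores, weights):
    """Integrate criterion scores and weights into one value per indicator."""
    return {name: sum(weights[c] * s for c, s in crit.items())
            for name, crit in scores.items()}

def sensitivity_check(scores, weights, trials=1000, noise=0.1, seed=0):
    """Perturb the weights repeatedly and count how often each indicator ranks
    first, as a rough check on the robustness of the selection."""
    rng = random.Random(seed)
    wins = dict.fromkeys(scores, 0)
    for _ in range(trials):
        perturbed = {c: max(w + rng.uniform(-noise, noise), 0.0) for c, w in weights.items()}
        total = sum(perturbed.values())
        perturbed = {c: w / total for c, w in perturbed.items()}
        values = overall_value(scores, perturbed)
        wins[max(values, key=values.get)] += 1
    return wins

print(overall_value(indicator_scores, criteria_weights))
print(sensitivity_check(indicator_scores, criteria_weights))
```

A real application would replace the weighted sum with whatever aggregation and weight-elicitation technique the agency adopts; the point of the sketch is only that explicit scores and weights leave an audit trail and can be stress-tested before the index is fixed.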


