GSDI-9 Conference Proceedings, 6-10 November 2006, Santiago, Chile

The Value of Performance Indicators to Spatial Data Infrastructure Development

Garfield Giff, Ph.D.
OTB Research Institute for Housing, Urban and Mobility Studies
Delft University of Technology
Delft, The Netherlands
[email protected]

Abstract

The successful implementation of the next generation of spatial data infrastructures (SDIs) will, in part, depend on the ability of SDI program coordinators to understand and report on the success or failure of the previous generations of SDIs. This is important from both a funding and an effectiveness perspective. From an effectiveness point of view, the performance of a system can only be improved if it is measured, while, in terms of recapitalisation, the attraction of new investment (both private and public sector) depends on the past, present, and projected performance of the infrastructure. It is therefore imperative that SDI coordinators develop metrics to assess, evaluate, and report on the performance of their initiatives. If the previous generations of SDIs are to attract future investment, they must demonstrate that SDIs can achieve, and are achieving, their objectives. This requires that the objectives of an SDI be well defined and clearly stated. The assessment of the performance of an SDI should be in a format that illustrates both the economic and the social benefits of the infrastructure. This is necessary since, in some cases, an SDI is required not only to support the provision of public goods but also to generate a surplus on investment. As with other infrastructure, assessing the performance of an SDI is a very complex task. The task is difficult because of the multifaceted and intricate structure of an SDI, coupled with the qualitative and external benefits it generates. In infrastructure management, Performance-Based Management is a tried and proven technique used by managers to evaluate, demonstrate, and improve the performance of an infrastructure. Therefore, based on the concept that an SDI is an infrastructure, it should be possible for SDI program coordinators to apply Performance-Based Management to an SDI in order to assist in evaluating, analysing, and reporting on its performance. An important step in the application of the Performance-Based Management style to SDI sustainability is the development of Performance Indicators (PIs). PIs can be defined simply as quantifiable measures of the degree to which the objective(s) of a program or initiative is being achieved. It is important for SDI program coordinators to develop PIs based on their objectives, since PIs can be used to measure, articulate, and display the performance of an SDI. In an economic sense, PIs can be used to illustrate to the stakeholders the performance level (in terms of effectiveness, efficiency and reliability) at which the SDI is operating. The paper explores these concepts in terms of an SDI.

Introduction

The SDI community has long proclaimed the benefits to be gained from the implementation of SDIs throughout the information society. However, to date there is no methodology in place to effectively measure these benefits and to justify the resources expended on the implementation of SDIs. Today, the pressure to justify expenditure on SDIs is growing due to changes in government fiscal policies, the movement of world economies towards market-oriented economies, and the fact that the first generation of SDIs now requires recapitalisation and reengineering. The recapitalisation and reengineering of these SDIs will require new funding and thus new funding models to cope with the changes (Giff, 2005). Research indicates that these changes in government policies and funding models have transformed the processes for accessing funds for the implementation and maintenance of an SDI. A significant feature of this new process is that governments, the main financiers of SDIs, now demand that methods be in place to justify the implementation of an SDI before funds can be accessed (Stewart, 2006).

One technique that could be used to justify the existence of SDIs is the evaluation of their performance. However, measuring the performance of an SDI is an intriguing task, in that the performance of an SDI cannot simply be measured in terms of profitability or generic financial viability (Lawrence, 1998). This is because SDIs are complex in nature with monopolistic tendencies and therefore have complex performances (Lawrence, 1998; Rajabifard, 2002; Giff and Coleman, 2003b; De Man, 2006). That being said, the need to evaluate the performance of SDIs, and so remove intuitive decision-making regarding their implementation, remains paramount. The solution to this problem may be the application of a technique widely used in infrastructure evaluation: measuring performance through the relationship amongst inputs, outputs and outcomes (Lawrence, 1998). This relationship can be illustrated with the help of Performance Indicators, that is, the application of metrics to a program in order to provide performance information on its outputs and outcomes with respect to its inputs and objectives. If these performance indicators are to provide precise and accurate performance information about an SDI, they must be designed and implemented within the broader framework of a performance-oriented management system. One such system is the Performance-Based Management style, a procedure that has proven successful as a tool to support the creation of a more favourable environment for infrastructure evaluation and that may therefore be applicable in SDI evaluation.

The main purpose of this paper is to increase the awareness of the SDI community of the value of performance indicators to the success and sustainability of SDI implementation. This is achieved through an exploration of the application of the performance-based management style to the operation and evaluation of SDIs. The paper then focuses on the importance of evaluation to SDI development and sustainability. This is followed by the introduction of performance indicators as a possible tool to assist in the evaluation of an SDI. The focus then shifts to the development of a conceptual framework to assist in the design of performance indicators for SDI evaluation. A preliminary framework is then presented, and the paper closes with an overview of recommendations on additional research necessary to facilitate further development of the framework.

The Application of Performance-Based Management to SDI

To be successful we must know our strengths and weaknesses. Knowledge about the strengths and weaknesses of an organisation or a program can be obtained by measuring its results (i.e., its success or lack of success). Measuring the results of a process is one of the most systematic means of differentiating success from failure and thus of identifying the strengths and weaknesses of the process. If the success or failure of a process is not identified, it becomes more difficult for an organisation to learn from the activities of the process and improve on them. In the context of an infrastructure, this implies that the activities of the infrastructure must be measured and managed if the infrastructure is to operate at an optimum level (CMIIP, 1995).

Performance-Based Management (PBM) is one technique that enables infrastructure managers to operate an infrastructure in such a manner that its strengths and weaknesses are constantly identified, analysed and managed (GSA, 2000). PBM SIG (2001) defines Performance-Based Management as "…a systematic approach to performance improvement through an ongoing process of establishing strategic performance objectives; measuring performance; collecting, analysing, reviewing, and reporting performance data; and using that data to drive performance improvement." That is, PBM consists of management processes that translate business strategies into actions at the operational level (where they can be evaluated for best value), develop and apply measuring tools, analyse and report the results, and apply these results to improve performance (Blalock, 1999; GSA, 2000). The PBM style is an iterative process that involves at least six key processes (Environment Canada, 2000) (see Figure 1). These processes facilitate the monitoring and analysis of the strengths and weaknesses of a program in a systematic manner. The information gained from the monitoring and analysis phases is then used to constantly improve the quality of the program, as well as to justify continued investment in the program.

[Figure 1 shows the six key processes of the PBM style arranged as an iterative cycle:]
Process 1: Define the organisation's mission, goals, and objectives.
Process 2: Identify key performance areas (i.e., areas critical to the understanding and evaluation of the process).
Process 3: Develop an integrated performance measuring system.
Process 4: Develop data collection system(s).
Process 5: Analyse, review and report on performance.
Process 6: Apply performance information to system improvement.

Figure 1: Six Key Processes of the PBM Style (adapted from NPR, 1997 and PBM SIG, 2001)

Figure 1 illustrates the six key processes involved in the application of the PBM style to the operation of an organisation or a project. The first process involves the definition of the organisation's mission, goals, and objectives. This is the strategic phase of the PBM style, where the goals and objectives are expressed in such a manner that they are measurable (i.e., written in the form of action statements). This aspect of the PBM style may be considered the development of a performance framework. In the second process, an analysis of the performance framework is carried out to determine the areas of the program that are most crucial to understanding and measuring the program's success. This is necessary since it may not be possible or practical to measure all aspects of a program; selecting the key operational areas of an organisation may therefore be vital to the success or failure of the evaluation. The third process is a key aspect of PBM and is the main concern of this paper. In this phase, what is to be measured and how it is to be measured are decided. A significant outcome of this phase is the development of indicators (performance indicators) that provide information to assist in determining the success or failure of the project. The fourth process involves the development of policies and systems to collect, efficiently and effectively, the data necessary to assess performance. In process five, the performance data collected in the previous process are reviewed, analysed, and communicated to the decision-makers. Process six, the final process of the cycle, is the application of the performance information to improving the operation of the project or organisation.

The preceding paragraphs introduced the PBM style in the context of infrastructure management and reviewed the processes involved in the implementation of a PBM program. However, the question likely remains in the mind of the reader: what are the benefits of the PBM style to the efficient implementation of an SDI? This paragraph attempts to answer that question. Similar to any other infrastructure, an SDI will benefit from the application of the PBM style in terms of the ability to analyse and improve the activities of the SDI to meet the changing needs of the stakeholders, the users, and the general implementation environment.

Also, the information resulting from the analysis can be used to justify investment, or the need for additional investment, in SDIs. Additionally, the implementation of an SDI can benefit significantly from the following features of a PBM program identified by McNamara (1999) and PBM SIG (2001):

• The management tools used to help understand the processes: identifying what we know or revealing what we do not know, identifying the problems, and establishing whether decisions are based upon well-documented facts and figures or on intuition and gut feelings.
• The structured approach of focusing on strategic performance objectives: the focus of the program is on results, not activities.
• The ability to communicate accurately the performance of the organisation to upper management and stakeholders.
• The linkage between performance and expenditures: at the beginning of the cycle, PBM assists in identifying the goals to be accomplished and the resources necessary to accomplish them; at cycle end, it shows what was actually accomplished and the resources actually used to achieve the results.
• The ability to involve all stakeholders in the evaluation process.
• The ability to perform both customer needs assessments and customer satisfaction surveys: is the organisation meeting customer requirements, and how do we know that we are providing the services and products that our customers require?
• The provision of indicators: they assist in identifying where improvements need to be made and whether improvements are actually made.

The above features of the PBM style mould it into a tool capable of assisting immensely in the effective and efficient implementation of an SDI. Therefore, it is recommended that the SDI community investigate this management style for the implementation and maintenance of the next generation of SDIs. (For more in-depth reading on the PBM style, see Hale, 2003; PBM SIG, 2001; GSA, 2000; and NPR, 1997.)
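To make the iterative character of the six processes in Figure 1 concrete, the following minimal Python sketch loops through the cycle once the strategic objectives (process 1) are in place. It is purely illustrative: the function names, the dummy scores and the 0.8 target are assumptions of this sketch and are not taken from the PBM SIG handbook or from this paper.

```python
# Purely illustrative sketch of the iterative PBM cycle in Figure 1.
# All function names, data, and the 0.8 target are hypothetical.

def identify_key_areas(objectives):
    # Process 2: areas critical to understanding and evaluating each objective.
    return [f"key area for '{obj}'" for obj in objectives]

def develop_indicators(key_areas):
    # Process 3: one placeholder performance indicator per key area.
    return [f"PI measuring {area}" for area in key_areas]

def collect_data(indicators):
    # Process 4: stand-in for a real data collection system (dummy scores).
    return {pi: 0.72 for pi in indicators}

def analyse_and_report(data, target=0.8):
    # Process 5: report the indicators falling short of the target.
    return [pi for pi, score in data.items() if score < target]

def plan_improvements(weak_points):
    # Process 6: feed the findings back into the next planning round.
    return [f"strengthen {wp}" for wp in weak_points]

objectives = ["timely access to framework datasets"]   # result of process 1
for cycle in (1, 2):                                    # the cycle repeats
    data = collect_data(develop_indicators(identify_key_areas(objectives)))
    weak = analyse_and_report(data)
    print(f"Cycle {cycle}: {len(weak)} indicator(s) below target;",
          plan_improvements(weak))
```

The point of the sketch is simply the feedback loop: what is reported in process 5 becomes the input to the improvements of process 6 and to the next planning round.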

SDI Evaluation and Performance Indicators (PIs)

The majority of the visible Spatial Data Infrastructures (SDIs) are now near the completion of their first phase (see Onsrud, 2000; Crompvoets et al., 2004; and Kok and van Loenen, 2005 for a list of visible SDIs). The consequence is that these SDIs now require, or will soon require, reengineering and recapitalisation in order to transform them into SDIs capable of providing the services demanded by current and future users. Recapitalisation and reengineering of these first generation SDIs in today's stringent economic and political climate will require evaluation in terms of both efficiency and effectiveness. An evaluation of efficiency refers to measuring an SDI to determine whether it is achieving its objectives in the most economical manner. An effectiveness evaluation, on the other hand, refers to measuring an SDI to determine whether it is achieving its goals (i.e., the desired outcomes) and having the predicted impact on society. That is, an efficiency and effectiveness evaluation of an SDI sets out to determine whether or not the SDI is operating at an optimal level and whether the outcomes, if any, are having the expected levels of impact.

SDI Evaluation for Recapitalisation

First generation SDIs (i.e., the first set of SDI initiatives [Masser, 1998; Giff and Coleman, 2003a]) evolved mainly out of National Mapping Agencies (NMAs), spatial information oriented government and quasi-government agencies, and special projects that relied heavily on spatial information. As a result, the first generation of SDIs was funded from the budgets of these agencies or through one-time grants (Giff and Coleman, 2003b). This type of funding arrangement will be inadequate for the implementation and maintenance of future generations of SDIs (Giff and Coleman, 2002; Giff, 2005). The implementation of the next generation of SDIs will require structured long-term funding arrangements. Long-term funding for the next generation of SDIs will be provided mainly by the public sector, supported by public-private partnerships, with sole private sector funding playing a lesser role (Giff, 2005). To access these structured long-term funding arrangements in today's economic climate, SDI program managers must provide information on the efficiency level of the first generation of SDIs, as well as demonstrate that metrics are in place to measure the performance of the next generation of SDIs. This type of information is necessary as public sector funding policies converge on those of the private sector, in that both sectors are now moving towards funding more performance-based initiatives (CMIIP, 1995; OAG, 1995; PSMO, 1997). This movement even affects programs producing, or facilitating the production of, public goods (Bullen, 1991). New public sector funding policies now require these organisations to perform at an optimal level and to generate a surplus, or at least break even. To demonstrate this to the financiers, organisations must have in place metrics to measure their performance. Therefore, if the next generation of SDIs is to receive any significant (structured) funding from government, they must be capable of indicating their current level of efficiency, as well as having in place indicators to measure future levels of efficiency. This statement is supported by the number of public sector funding regulations in place that demand performance-based indicators as a prerequisite for the release of funds.
Some examples of these regulations are as follows:

• In Australia: the Financial Administration and Audit Act, the Public Sector Management Act, and CEO Performance Agreement and Assessment.
• In Canada: the Federal Accountability Act and Action Plan, the Results-based Management and Accountability Framework, the Treasury Board Secretariat Evaluation Policy, and Infrastructure Canada's Report on Plans and Priorities.
• In Great Britain: Public Service Agreements, the Performance Assessment Framework in the NHS and social services, and the NHS Wales Performance Management Framework.
• In the United States: the Information Technology Management Reform Act, the Government Performance and Results Act, the National Performance Review, and the Federal Acquisition Act.

SDI Evaluation for Reengineering

A reengineering evaluation is an assessment that focuses on the effectiveness of the SDI. That is, the evaluation is done to determine the levels of outputs and outcomes and their impact on the community, where outputs are viewed as the products or services produced through program activities and made accessible to the users, and outcomes are the effects of the outputs on the users (i.e., the results of the interaction between outputs and users). Impact, on the other hand, refers to the long-term effects of outcomes on the users or the wider community. A reengineering evaluation is important as it informs the stakeholders whether or not SDIs are achieving what they set out to do, as well as demonstrating their relevance to society. It provides a clear picture of the strengths and weaknesses of the SDI with respect to its objectives. A schema of the strengths and weaknesses of a system is of the utmost importance to the upgrading of its processes and functions. Again, the move of the current SDI initiatives towards the next generation of SDIs will require that they are evaluated for the purpose of reengineering. Information from this type of evaluation will provide stakeholders with a schema of what aspects of the SDI need to be redesigned in order to cope with expected demands from the spatial information community. Also of importance is the fact that the present political climate demands that SDI program coordinators clearly illustrate to the public that SDIs are providing the services they promised, i.e., that they are effective (Stewart, 2006). The demand for this type of information (performance reporting) is also evident from the different public sector regulations listed in the previous section. The challenge, therefore, is not only for SDI stakeholders to identify the strengths and weaknesses of their SDI in order to improve on performance, but also to be capable of adequately reporting it to the financiers and, where applicable, the public. To achieve these feats the stakeholders must have access to information that will help them to better understand the issues involved in evaluation (Bullen, 1991). An available instrument capable of providing stakeholders with accurate evaluation information is a set of performance indicators.

Performance Indicators

Previous sections introduced the concept of evaluating an SDI using indicators as a yardstick to measure performance. Indicators used in an evaluation exercise are normally called Performance Indicators (WHO, 2000). They are an integral part of process 3 of the PBM style and may be defined as

"…the measurement of a piece of important and useful information about the performance of a program expressed as a percentage, index, rate or other comparison which is monitored at regular intervals and is compared to one or more criterion." (OPM, 1990)

Based on the above definition, a Performance Indicator (PI) is a metric that measures the degree to which key functions (objectives) of an organisation are being achieved. PIs are usually developed with respect to the organisation's or program's targets (i.e., its goals and/or objectives). A PI can be either a quantitative or a qualitative measure of performance (Environment Canada, 2000; WHO, 2000). This reflects the fact that the outputs, outcomes and impacts of a program, in particular an infrastructure program, can be either quantitative or qualitative in nature. Quantitative PIs are composed of a numeric value and a unit of measure. The numeric value provides the PI with magnitude (how much), while the unit of measure gives the numeric value meaning (TRADE, 1995). In addition, a quantitative PI can use a single-dimensional unit (e.g., metres or dollars) or a multidimensional unit of measure (e.g., a ratio). PIs represented as a single-dimensional unit are usually used to compare or track very basic functions of an organisation, while multidimensional PIs are used for more complex information collection. For example, the unit of measure applied to a PI measuring the amount of electricity used in the production of a map file could be a ratio of kilowatts per map file.

Qualitative PIs are usually used to measure the socio-political outcomes or impact of a program (e.g., user satisfaction with a particular product). However, although the outcomes or impact of these programs are usually qualitative, quantitative information is what governments and funding agencies require in order to ensure that informed decisions are made regarding investment in public goods (CMIIP, 1995; WHO, 2000). This implies that PIs are normally required for comparative purposes, and researchers therefore recommend that a quantitative value be placed on a qualitative PI (CMIIP, 1995; Lawrence, 1998). This is an important aspect for SDI evaluation, since a significant number of the outcomes and impacts of an SDI are qualitative in nature. The transformation of a qualitative PI into a quantitative PI is a complex task that depends heavily on the process or processes to be measured. The generic transformation is usually to employ some form of range or ranking scale (WHO, 2000); a simple illustration is given after the list of characteristics below. However, ranking scales are not always suitable for infrastructure evaluation, and more appropriate quantifications for SDI PIs therefore need to be developed. In addition to being quantifiable, PIs should have the following characteristics (PSMO, 1997; WHO, 2000; CHN, 2001; Jolette and Manning, 2001):

• Specific: clearly defined and easy to understand.
• Measurable: quantifiable, in order to facilitate comparison with other data.
• Attainable/Feasible: practical, achievable, and cost-effective to implement.
• Relevant: a true representation of the functions they intend to measure; capable of providing factual, timely and easily understandable information about the function(s).
• Timely and Free of Bias: the information collected should be available within a reasonable time frame, impartially gathered, and impartially reported.
• Verifiable and Statistically Valid: scientifically sound, with the possibility to check the accuracy of the information produced based on the sample size.
• Unambiguous: a change in an indicator should allow a clear and unambiguous interpretation; for example, it should be clear whether an increase in the value of a PI represents an improvement or a reduction in the item measured.
• Comparable: the information should show changes in a process over time or changes between processes. This may require quantification of the PI.
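As promised above, the short sketch below shows one common way of placing a quantitative value on a qualitative PI such as user satisfaction, using a simple ordinal ranking scale. The five-point scale, the survey responses, and the normalisation are assumptions made for this illustration; they are not taken from the paper or from WHO (2000), and, as noted above, ranking scales are not always suitable for infrastructure evaluation.

```python
# Hypothetical example of quantifying a qualitative PI with a ranking scale.
# The scale values and survey responses are invented for illustration only.

SATISFACTION_SCALE = {
    "very dissatisfied": 1,
    "dissatisfied": 2,
    "neutral": 3,
    "satisfied": 4,
    "very satisfied": 5,
}

def satisfaction_index(responses):
    """Average scale score, normalised to 0-100 so it can be tracked over time."""
    scores = [SATISFACTION_SCALE[r] for r in responses]
    mean = sum(scores) / len(scores)
    return 100 * (mean - 1) / (len(SATISFACTION_SCALE) - 1)

# E.g., user satisfaction with the metadata provided by a (hypothetical) data supplier.
survey = ["satisfied", "very satisfied", "neutral", "satisfied", "dissatisfied"]
print(f"Metadata satisfaction index: {satisfaction_index(survey):.0f}/100")
```

Expressed this way, the qualitative PI acquires a numeric value and an implicit unit (index points on a 0-100 scale), which makes it comparable over time, one of the characteristics listed above.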

PIs with the majority of the above characteristics are referred to as robust indicators and are therefore more likely to be intelligible for their intended use (Audit Commission, 2000). However, in real-life situations it may be difficult to create PIs that fulfil precisely all of the criteria listed above, so trade-offs may be necessary when developing PIs. Although trade-offs are expected, PIs can still be effective if they are developed within the organisation's mission, goals and management style. In general, PIs with the SMART characteristics (Specific, Measurable, Attainable, Relevant and Timely; the first five characteristics in the list above) that are designed to measure key processes or functions within an organisation or a program are classified as Key Performance Indicators (OAGA, 1999; PBM SIG, 2001b). Key Performance Indicators (KPIs) are those PIs that are used to measure the critical success factors of an organisation (PSMO, 1997; Burby, 2005; Reh, 2005; Wikipedia, 2006). They provide comprehensive information about strategic or key areas of an organisation or a process. That is, KPIs provide more detailed information about key functions and processes than general PIs and are vital to decision-makers when it comes to recapitalisation and reengineering. Although PIs may have their drawbacks when it comes to measuring the qualitative aspects of an SDI, their other useful qualities make them applicable to SDI evaluation. If PIs, and in particular KPIs, for SDI evaluation are developed with the characteristics listed above, then their value as a tool for providing useful information to address the recapitalisation and reengineering of SDIs will be improved tremendously. However, for PIs (inclusive of KPIs) to have a significant impact on SDI evaluation, they must ultimately be designed around the complexity of an SDI and not simply be transplanted from other industries. This points to the need for a guide to assist the SDI community in the development of SDI-specific PIs for evaluation. This guide, possibly in the form of a framework for the development of PIs for SDIs, should include the variables that contribute to the complexity of an SDI and should also fit within process 3 of the PBM style (see Figure 1).

Developing PIs for SDI Evaluation

Increasingly, the financiers of SDIs are demanding that PIs (from here on, the term PIs refers to both general PIs and KPIs) be included in the business plan of an SDI. PIs have now become one of the main criteria for leveraging funds from both the public and private sectors for SDI implementation and maintenance. This is evident from their inclusion in the business plans used by SDI coordinating bodies to leverage funds for the implementation of the next generation of SDIs (see Table 1). Table 1 provides an overview of organisations within the SDI community that have published their activities in relation to the development of performance indicators. These organisations have either developed performance indicators for the evaluation of their SDI or are in the process of investigating the application of performance indicators in SDI evaluation.

Table 1: A snapshot of some of the performance-based activities initiated by members of the SDI community.

• GeoConnections (Canada): Developed a Logic Model with outcomes and performance indicators to measure them; part of the business plan used to leverage funds from the Federal Government.
• Federal Geographic Data Committee (U.S.A.): Adopted the Federal Enterprise Architecture 'Performance Reference Model' to measure the performance of their program; this was a requirement to continue leveraging Federal funding for the next phase of the NSDI.
• Australia New Zealand Land Information Council (Australia and New Zealand): In their 2004-2005 report, listed twelve outcomes of the Australian SDI.
• Public Sector Mapping Agencies (Australia): Included performance indicators in their Strategic Plan 2002-06 and also in their reports to their Board of Directors (Paull, 2004).
• Ruimte voor Geo-Informatie (Netherlands): Invested in research on the "Development of a Framework to measure the Performance of NSDIs" (RG117).
• GEOIDE (Canada): Sponsors research into economic issues associated with SDI implementation, and organises workshops on the evaluation of SDIs (two in the last six months).
• European Commission Joint Research Centre: Organised a number of educational activities on the economic issues of an SDI, the latest being a workshop on measuring the return on investment of an SDI.
• Meso-American and Caribbean Geo-Spatial Alliance (MACGA): With the support of the Centre of Property Studies, University of New Brunswick, organised workshops throughout the Caribbean that introduced the concept of developing performance indicators for both SDI and GIS evaluation.
• Global Spatial Data Infrastructure (GSDI): Organised a number of conferences and workshops which address the economic issues associated with SDI implementation.
• Infrastructure for Spatial Information in Europe (INSPIRE): In 2003, carried out an impact assessment on the implementation of a European-wide SDI.

Table 1, although not a complete listing, provides the reader with a snapshot of PI-related activities within the SDI community. From the table it can be seen that the wider SDI community now recognises the need for PIs in SDI evaluation. Also, the efforts underway in North America, Europe, Australia, and the Caribbean to use PIs in SDI evaluation signal the community's intent to improve SDI evaluation.

As mentioned before, this is an important step towards developing an efficient methodology to assist in the recapitalisation and reengineering of SDIs. Research performed by the author on SDI evaluation indicates that, although these efforts are mammoth steps in the right direction, their results (the PIs developed from the initiatives listed in Table 1) are not yet detailed enough to have a significant impact on SDI evaluation. This is because the PIs developed so far by the SDI community are not sufficiently quantifiable, nor are they detailed enough to be classified as key performance indicators (KPIs). This matters because any meaningful evaluation of an SDI will require that KPIs are available; the information provided by KPIs will be required by SDI stakeholders to ensure that more informed decisions on the recapitalisation and reengineering of SDIs are made. The type of information produced by KPIs facilitates the removal of subjective decision-making from SDI implementation. Therefore, it is vital to the success of the next generation of SDIs that KPIs are developed, along with general sets of PIs, as components of their business plans.

Developing PIs, and specifically KPIs, for SDI evaluation is a daunting task due to the complex nature of an SDI. The complexity of an SDI has been investigated, defined and explained by authors such as Coleman and McLaughlin, 1997; Rajabifard et al., 1999; Chan, 2001; Rajabifard, 2002; Williamson, 2002; Giff, 2005; van Loenen, 2006; De Man, 2006; and Grus, 2006. These authors researched the nature of SDIs in order to gain a better understanding of the process, so that the information gained from the research could be used to improve implementation. Coleman and McLaughlin (1997 and 1998), Chan (2001), Rajabifard (2002), and Williamson (2002) investigated the complexity of SDIs in order to develop a structure for their definition and classification, while Giff (2005), van Loenen (2006), De Man (2006) and Grus (2006) investigated SDI complexity in order to facilitate the development of frameworks for their evaluation and financing. Although the investigations were carried out for different purposes, the authors all agreed that SDIs are complex in nature. SDIs are complex, in part, due to the integration of their key components (i.e., the institutional framework, the standards, the human resources, and the datasets). This integration of what De Man (2006) describes as socio-technical assemblies, along with the activities within the implementation environments (e.g., cultural, political and economic issues), the need for hierarchical integration as well as horizontal integration across different political-administrative boundaries, and its dynamic nature, gives an SDI all the characteristics of a complex process as described by Eoyang (1996) and Cilliers (1998). Therefore, if an SDI is seen as a complex process, then its performance (i.e., its outputs, outcomes, and impact on society) must be complex as well (De Man, 2006). The challenge for the SDI community is therefore to develop PIs capable of measuring the complex performance of an SDI, that is, PIs that capture the direct qualitative and quantitative performance of SDIs, as well as the positive externalities produced by SDIs. The paper has already highlighted the difficulties involved in developing quantitative PIs for the qualitative performance of an SDI.
However, this intricacy is further increased when the performance to be measured consists of the qualitative externalities of the SDI. Consequently, PIs intended to assist in the comprehensive evaluation of an SDI must incorporate in their design variables that address the complexity of SDI performance (e.g., the qualitative nature of the outcomes and the externalities associated with the operation of an SDI). In summary, if SMART PIs (i.e., PIs that are specific, measurable, attainable, realistic and timely [PBM SIG, 2001b]) are to be developed to evaluate SDIs, then the designers of these PIs must have a clear understanding of the intricacies of an SDI, and this understanding should be embedded in the framework used to develop the SMART PIs.

Towards a Conceptual Framework for Developing PIs for SDIs

The creation of PIs for evaluation calls for the use of methodologies that include clearly designed logical steps. These logical steps may be viewed as a series of logic flow models that are tailored to suit the organisation's functions and activities and the purpose of the evaluation (GSA, 2000). That being said, a conceptual model (i.e., a framework) can be created that includes the universal concepts, principles and activities that are generally used in the design of PIs (Kanungo et al., 1999; GSA, 2000). This framework would be in part high-level and would require fine-tuning by individual organisations before actual execution. Applying the above theory to SDIs implies that, in designing a framework to develop PIs for SDI evaluation, the traditional methodologies for developing SMART PIs may be adopted. The author explored this hypothesis and concluded that, using analogies with other infrastructures and with organisations producing public goods, a conceptual framework for the development of PIs for SDI evaluation (hereafter referred to as The Framework) can be formulated. However, applying this evaluation concept to SDIs will not be as straightforward as applying it to other sectors, due to the complex nature of an SDI (GSA, 2000). This implies that the methodologies selected for inclusion in The Framework must be capable of encompassing knowledge of the complexity of an SDI. That is, the frameworks used in other sectors must be customised to cope with the complex and long-term performance of an SDI. This can be achieved by injecting knowledge of the complexity of an SDI into the selected generic framework(s) at the appropriate points, so that the PIs produced are sensitive to the complex performance of an SDI. The purpose of this conceptual framework (i.e., The Framework) would be to serve as a guide to the SDI community in the development of PIs. It should be noted that the application of The Framework to the development of PIs for a specific SDI would require customisation based on the definition, objectives, functions and implementation environment of that particular SDI. Also of importance is the fact that such a framework should be developed within the broader context of a PBM program. A framework for the development of PIs for SDIs should encapsulate the principles of PBM, so that the PIs produced are capable of articulating the relationship amongst inputs, outputs, outcomes, and impact; enhancing performance information; and indicating areas within the process that need improvement.

Framework Outline

For its initial design, The Framework focuses mainly on supporting the development of PIs to measure the efficiency and effectiveness of an SDI, that is, PIs that provide the information necessary to demonstrate the relevance of SDIs in today's information society. Measuring these two operational qualities of an SDI poses two key problems that must be considered in the design of The Framework. Firstly, the outcomes and impacts of SDIs tend to be qualitative; the PIs designed must therefore be capable of representing these values in a statistically useful manner.

Secondly, the outcomes and impacts are usually long-term (three to five years at a minimum), and this should be factored into The Framework. The preliminary work done by the author on the formulation of The Framework reveals that a schema of this nature should include at least nine fundamental steps. These basic steps were designed using analogies from infrastructure projects and other programs that facilitate the production of public goods. The steps should attempt to capture the unique relationships amongst the inputs, outputs, outcomes, and impacts of an SDI (see Figure 2). Figure 2 illustrates the relationship amongst a program's inputs, outputs, outcomes, and impacts within the context of efficiency and effectiveness. In the figure, the broad arrows show the relationships between aspects of the project; the change of these arrows from solid lines to dashed lines indicates the fuzziness of the relationship. The figure also indicates that a measure of efficiency is obtained via the relationship between inputs and outputs, whilst effectiveness is determined through the relationship between outputs and outcomes. Also possible, but more difficult to measure, is the level of effectiveness of the program resulting from the relationship between outcomes and impacts. It is nonetheless important that impact is measured in the case of an SDI, because of the expected long-term interaction of the SDI with the community.
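One common way to formalise these two relationships is sketched below. The notation is illustrative only and is not the author's; the concrete examples are assumptions chosen to match the clearinghouse example used later in the paper.

```latex
% Illustrative formalisation; not the author's notation.
\[
  \text{Efficiency} \approx \frac{\text{Outputs}}{\text{Inputs}}
  \qquad \text{e.g. } \frac{\text{datasets served via the clearinghouse}}{\text{operating cost}}
\]
\[
  \text{Effectiveness} \approx \frac{\text{Outcomes achieved}}{\text{Outcomes intended}}
  \qquad \text{e.g. } \frac{\text{users who found the datasets sought}}{\text{users surveyed}}
\]
```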

[Figure 2 depicts the performance flow of a program: inputs feed the project (driven by the project's objectives, goals and mission) and its activities, which produce outputs; outputs interact with users to produce outcomes, and outcomes interact with society to produce impacts. Efficiency is shown as the relationship between inputs and outputs, and effectiveness as the relationship between outputs, outcomes and impacts.]

Figure 2: Performance Flow of a Program (adapted from GSA, 2000 and Audit Commission, 2000)

The Framework presented in this section is only preliminary, in part because the author has yet to complete the research on the development of conceptual objectives for SDIs and on the identification of the key performance areas of an SDI (i.e., process 2 of the PBM style). In an attempt to encapsulate the relationships in Figure 2, together with the initial results of the research on performance frameworks, into an SDI evaluation process, the author formulated the following steps to form the skeleton of the preliminary Framework (see Figure 3):

1. Based on the objectives (program-level and strategic), the key performance areas and the purpose of the evaluation, decide what aspect of the program is to be measured. For the preliminary Framework, the need to determine the efficiency and effectiveness of an SDI was chosen as the purpose of the evaluation.
2. Identify the main activities and functions of the key performance areas of the program.
3. Clearly define the expected outputs, outcomes and, where possible, impacts. At this stage, decisions on milestone targets and measures can also be made.
4. Develop a list of Efficiency Indicators. In this phase, the aim is to determine whether or not the program is operating optimally. Efficiency is measured mainly through the relationship between inputs and outputs; the indicators in this category should therefore be capable of demonstrating the amount of input units involved in the production of a specified output. In terms of an SDI, some of the challenges in developing this type of PI are defining the monetary inputs and deciding what is to be classified as an output. For example, is the clearinghouse itself an output, or are the datasets made accessible through the clearinghouse the output? Other variables the PIs measuring efficiency should attempt to encapsulate are users' satisfaction level, the effects of externalities, and the effects of the monopolistic nature of an SDI.
5. Select KPIs from the list of efficiency PIs developed in the previous step.
6. Develop a list of Effectiveness Indicators. Effectiveness represents the influence the outputs are having on the users and, to a lesser extent, their impact on the wider community. For an SDI, it is expected that the PIs in this category will be more qualitative than quantitative. An example of a quantitative PI for an outcome is the percentage of users who found the datasets they were looking for via the clearinghouse, while a qualitative PI for an outcome could be the level of satisfaction a user derives from the metadata provided by a data supplier (an illustrative sketch of computing such indicators is given below). The development of PIs in this category will require extensive investigation into the medium- to long-term effects of an SDI on society.
7. Select KPIs from the list of effectiveness PIs developed in the previous step.
8. Evaluate the PIs to determine whether or not they pass the SMART test.
9. Investigate whether the PIs are cost-effective to implement.

The above nine steps, the components of the preliminary Framework, can only serve as a skeleton for the production of SDI-oriented PIs, in that PIs developed using this Framework alone will not possess all the characteristics necessary to satisfy the different categories of PIs required for SDI evaluation (a minimum of three categories: a set of PIs for efficiency, a set of PIs for effectiveness with respect to outcomes, and a set of PIs for effectiveness with respect to impacts). Therefore, the application of the above nine steps to the development of a particular set of PIs would require the inclusion of additional variables, more specific to the set in question, to facilitate the capture of the additional data necessary to improve the quality of the PIs (Figure 3).
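To make steps 4 and 6 slightly more tangible, the sketch below records one efficiency PI and one effectiveness PI for the clearinghouse example mentioned in the steps. The dataclass, field names, cost figure and survey numbers are all hypothetical assumptions of this illustration and are not part of The Framework.

```python
# Hypothetical sketch: one efficiency PI and one effectiveness PI for the
# clearinghouse example used in steps 4 and 6. Names and numbers are invented.
from dataclasses import dataclass

@dataclass
class PerformanceIndicator:
    name: str
    category: str          # "efficiency" or "effectiveness"
    value: float
    unit: str

    def report(self) -> str:
        return f"{self.name} ({self.category}): {self.value:.2f} {self.unit}"

# Step 4: efficiency PI relating an input (operating cost) to an output (datasets served).
operating_cost_eur = 250_000.0        # hypothetical input for one reporting period
datasets_served = 12_500              # hypothetical output
efficiency_pi = PerformanceIndicator(
    name="Cost per dataset served via the clearinghouse",
    category="efficiency",
    value=operating_cost_eur / datasets_served,
    unit="EUR per dataset",
)

# Step 6: effectiveness PI relating an output to an outcome for users.
users_surveyed = 600                  # hypothetical survey size
users_who_found_data = 420
effectiveness_pi = PerformanceIndicator(
    name="Users who found the datasets they were looking for",
    category="effectiveness",
    value=100 * users_who_found_data / users_surveyed,
    unit="% of surveyed users",
)

for pi in (efficiency_pi, effectiveness_pi):
    print(pi.report())
```

A qualitative effectiveness PI, such as satisfaction with metadata, could be fed into the same structure once it has been quantified, for example with the ranking-scale approach sketched earlier in the paper.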

[Figure 3 sketches the key variables and processes involved in developing PIs for SDI evaluation: objectives drive activities/functions that produce outputs; outputs and their applications lead to outcomes for users, and outcomes, together with externalities, lead to impacts on the wider community, all within the implementation environment. PIs1 relate objectives, activities and outputs (efficiency); PIs2 relate outputs, outcomes and users (effectiveness with respect to outcomes); PIs3 relate outcomes, externalities and impacts on the community (effectiveness with respect to impacts).]

Figure 3: Key Variables and Processes Involved in the Development of PIs for SDI Evaluation

Figure 3 is a schematic representation of some of the processes and variables involved in the design of the PIs required to measure the efficiency and effectiveness of an SDI. In the figure, the set of PIs classified as PIs1 is used to measure efficiency. This is probably the most straightforward set of PIs to design, as the relationship between inputs and outputs can easily be quantified. A more complex set of PIs to design is PIs2, the set of PIs used to measure effectiveness in terms of outcomes and their relationship with users and society. For an SDI, outcomes are largely qualitative, which makes the design of PIs in this category more difficult. Also of interest to this category are the externalities generated due to the monopolistic nature of an SDI; externalities generated by an SDI will further complicate the design process of these PIs. Figure 3 also provides an overview of some of the variables influencing the design of PIs3 (i.e., the set of PIs used to measure effectiveness in terms of impacts). These variables are those that influence the level of impact an SDI will have on the community, and also those that will react to these influences. Another schematic representation of the key processes and variables involved in the design of PIs for SDI evaluation can be seen in Table 2.

Table 2: A list of variables used in the development of PIs

Variables used in the development of PIs to measure efficiency:
• Goals/Objectives | Inputs | Outputs | Efficiency PIs
• Inputs | Outputs | Externalities | Efficiency PIs

Variables used in the development of PIs to measure effectiveness:
• Functions/Activities | Outputs | Outcomes | Effectiveness PIs
• Outputs/Applications | Outcomes | Impacts | Effectiveness PIs
• Outcomes | Impacts | Externalities | Effectiveness PIs

Table 2, a component of the preliminary Framework, serves the purpose of facilitating the tabulation of details regarding the variables that should be studied when developing PIs. The tabulation of these variables is an important aspect of The Framework as it allows for the collection and structuring of decisive information on the variables. An analysis of a table of this nature should therefore provide designers of SDI PIs with information that should greatly enhance their ability to produce good quality SMART PIs. At present, Table 2 is only a preliminary schema, and for it to be more effective additional research is required to fill the gaps that currently exist. It should be noted, however, that a conceptual framework of this nature is not the be-all and end-all of developing PIs and will have its limitations. That being said, if PIs are developed in a collaborative manner, using The Framework as a guide and within the concept of a PBM style, while bearing in mind the following rules of thumb: that there is no one-stop PI for evaluation, that a minimum number of PIs is most effective, and that PIs evolve through time, then the results should be effective PIs capable of assisting in the evaluation of an SDI.

Further Research

The Framework presented in this paper is the preliminary result of research in progress on the evaluation of SDIs. The initial research also indicates that, for the evaluation of an SDI to be effective or meaningful, SDIs must be managed within the paradigm of a PBM program; research on SDI evaluation should therefore cover this concept. In this context, the author recommends that the next phase of the research be a continuation of the investigation into process 1 (see Figure 1), with the aim of developing conceptual objectives for SDIs. These objectives can then be used in the development of performance frameworks and PI frameworks. Additional research is also necessary in process 2 to provide the SDI community with additional knowledge of the areas critical to the success of an SDI. It is expected that the results of research carried out in process 2 will facilitate the identification of the critical areas and components of an SDI, or at least a framework to identify these areas.

In addition, the author recommends that further research be carried out by the SDI community on the development of a performance framework, a framework for the development of PIs for an SDI, and a broader framework for SDI evaluation that encompasses all the other frameworks. Within this context, the problem of transforming qualitative PIs into quantitative PIs should also be explored.

Conclusion

The paper presented preliminary results of research into possible techniques available for SDI evaluation. Preliminary results were presented because the author is of the opinion that the SDI community needs to increase its awareness of SDI evaluation at this point in SDI development. The research established that there is a need to evaluate SDIs not only to justify the expenditure on their implementation but also to determine whether or not they are achieving their objectives. The need for a performance-oriented management program as part of the evaluation process was also observed. The paper reviewed the application of the PBM style as a possible solution and, based on the characteristics highlighted in the paper, it can be concluded that it is a suitable tool for enhancing the environment for SDI evaluation. From the review presented of PIs and their value in the evaluation process, it is reasonable to foresee PIs playing a significant role in the evaluation of future SDIs. This is evident from the characteristics identified in the paper that are favourable to infrastructure evaluation, and to SDI evaluation in particular. In support of this, the paper presented a Framework for developing PIs for SDI evaluation. Although The Framework is only preliminary, it provides the reader with an insight into the steps and intricacies involved in the design of PIs for SDI evaluation. In concluding, it must be stated that, like an SDI itself, the development of indicators to evaluate its performance is complex. The paper only scratches the surface in identifying some of the key variables that contribute to the complexity of designing PIs for SDI evaluation. Therefore, it is advised that the SDI community perform more in-depth studies on PIs in order to develop a more comprehensive Framework to act as a guide for the development of PIs to evaluate current and future SDIs.

Acknowledgements

The author would like to acknowledge the following persons for their editorial contributions to the paper: Joep Crompvoets of Wageningen University, Bastiaan van Loenen of Delft University of Technology, and Jaap Zevenbergen of Delft University of Technology. The author also acknowledges the following organisations for their financial support of the research: the OTB Research Institute, Delft University of Technology, and the Dutch Bsik Programme 'Space for Geo-Information (RGI-005)'.

References

Audit Commission, 2000, On Target: The Practice of Performance Indicators. Audit Commission Publication (Audit Commission/Bookpoint Ltd, London, UK).

Blalock, Ann Bonar, 1999, Evaluation research and the performance management movement. Evaluation, 5(2), pp. 245-258.

Bullen, Paul, 1991, Performance Indicators: New management jargon, political marketing, or one small element in developing quality services. Caring, September 1991 issue.

Burby, Jason, 2005, 'Defining Key Performance Indicators.' ClickZ Network. http://www.clickz.com/showPage.html?page=3527431

(CHN) Child Health Network for Greater Toronto, 2001, A Performance Evaluation Framework for the Child Health Network: Background Discussion Paper. A CHN discussion paper, Toronto, Ontario, Canada.

(CMIIP) The Committee on Measuring and Improving Infrastructure Performance, 1995, Measuring and Improving Infrastructure Performance (National Academy Press, Washington, D.C.).

Chan, T., 2001, The Dynamic Nature of Spatial Data Infrastructure: A Method of Descriptive Classification. Geomatica, 55(1), pp. 65-72.

Cilliers, Paul, 1998, Complexity and Postmodernism: Understanding Complex Systems (Routledge, London, England).

Coleman, D. and McLaughlin, J., 1997, Defining Global Geospatial Data Infrastructure (GGDI): Components, Stakeholders and Interfaces. Proceedings of GSDI 2, Chapel Hill, USA, 20-21 October 1997.

Coleman, D.J. and McLaughlin, J.D., 1998, Defining Global Geospatial Data Infrastructure (GGDI): Components, Stakeholders and Interfaces. Geomatica, 52(2), pp. 129-143.

Crompvoets, J., Bregt, A., Rajabifard, A. and Williamson, I., 2004, Assessing the worldwide developments of national spatial data clearinghouses. International Journal of Geographical Information Science, 18(7), pp. 665-689.

De Man, Erik W.H., 2006, Understanding SDI: Complexity and Institutionalization. International Journal of Geographical Information Science, 20(3), pp. 329-343.

Environment Canada, 2000, Manager's Guide to Implementing Performance-Based Management. An Environment Canada report, Ottawa, Ontario, Canada.

Eoyang, G.H., 1996, A Brief Introduction to Complexity in Organizations. Chaos Limited, Circle Pines, MN. Available at http://www.chaoslimited.com/A%20Brief%20Introduction%20to%20Complexity%20in%20Organizations.pdf. Last accessed August 2006.

(GSA) General Services Administration Office of Governmentwide Policy, 2000, Performance-Based Management: Eight Steps to Develop and Use Information Technology Performance Measures Effectively. A General Services Administration report, Washington, DC, USA.

Giff, Garfield and Coleman, David, 2002, Funding Models for SDI Implementation: From Local to Global. Proceedings of the GSDI 6 Conference on SDI, Budapest, Hungary, September 2002.

Giff, G. and Coleman, D., 2003a, Spatial Data Infrastructure Developments in Europe: A Comparative Analysis with Canada. A GeoConnections Report, GeoConnections, Natural Resources Canada, Ottawa, ON, Canada.

Giff, G. and Coleman, D., 2003b, Financing Spatial Data Infrastructure Development: Examining Alternative Funding Models. In Developing Spatial Data Infrastructures: From Concept to Reality, edited by Williamson, I., Feeney, M.E. and Rajabifard, A. (Taylor & Francis, London, UK), pp. 211-233.

Giff, Garfield, 2005, Conceptual Funding Models for Spatial Data Infrastructure Implementation. Ph.D. thesis, Department of Geodesy and Geomatics Engineering, University of New Brunswick, Fredericton, New Brunswick, Canada, March 2005.

Grus, Lukasz, 2006, National Spatial Data Infrastructures as Complex Adaptive Systems: The First Step into an Assessment Framework. M.Sc. thesis, Wageningen University and Research Centre, Wageningen, The Netherlands.

Hale, Judith, 2003, Performance-Based Management: What Every Manager Should Do to Get Results (Pfeiffer, November 2003, ISBN 0-7879-6036-5).

Jolette, Denis and Manning, Ted, 2001, Developing Performance Indicators for Reporting Collective Results. A Treasury Board of Canada Secretariat report. Available at http://www.tbs-sct.gc.ca/rma/eppi-ibdrp/hrs-ceh/4/DPI-EIR_e.asp

Kanungo, S., Duda, S. and Srinivas, Y., 1999, A Structured Model for Evaluating Information System Effectiveness. Systems Research and Behavioural Science, 16, pp. 495-518.

Kok, B. and van Loenen, B., 2005, How to assess the success of National Spatial Data Infrastructures? Computers, Environment and Urban Systems, 29, pp. 699-717.

Lawrence, D., 1998, Benchmarking Infrastructure Enterprises. In Infrastructure Regulation and Market Reform, edited by the Australian Competition and Consumer Commission and the Public Utilities Research Centre (AGPS, Canberra).

Masser, Ian, 1998, The First Generation of National Geographic Information Strategies. Selected Conference Papers, 3rd GSDI Conference, Canberra, Australia, 17-19 November 1998. http://www.eurogi.org/gsdi/canberra/masser.html (last accessed 15 November 2004).

McNamara, Carter, 1999, Performance Management: Performance Plan. Free Management Library. Available at http://www.mapnp.org/library/perf_mng/prf_plan.htm

(NPR) National Partnership for Reinventing Government, 1997, Serving the American Public: Best Practices in Customer-Driven Strategic Planning. Available at http://www.orau.gov/pbm/documents/documents.html

(OAG) The Canadian Office of the Auditor General, 1995, The 1995 Report of the Auditor General of Canada, Chapter 14 (see section 14.64).

(OAGA), 1999, OAG Audit Standard: The Audit of Performance Indicators. An OAGA report, West Perth, Australia. Available at http://www.audit.wa.gov.au/pubs/ASD_2-99_PI_9899.pdf

(OPM) Office of Public Management, New South Wales, 1990.

Onsrud, Harlan, 2000, 'Survey of National and Regional Spatial Data Infrastructure Activities Around the Globe.' A report on the status of SDI initiatives around the world, University of Maine, Maine. http://www.spatial.maine.edu/%7eonsrud/GSDI.htm. Last accessed August 2006.

Paull, Daniel L., 2004, Spatially Enabling Australia through Collaboration and Innovation. A PSMA Australia Limited report, Griffith, ACT, Australia. Available at http://www.psma.com.au/resources/spatially-enabling-australia-through-collaboration-and-innovation

(PBM SIG) Performance-Based Management Special Interest Group, 2001, The Performance-Based Management Handbook, Volume 1: Establishing and Maintaining a Performance-Based Management Program (U.S. Department of Energy and Oak Ridge Associated Universities).

(PBM SIG) Performance-Based Management Special Interest Group, 2001a, The Performance-Based Management Handbook, Volume 2: Establishing an Integrated Performance Measuring System (U.S. Department of Energy and Oak Ridge Associated Universities).

(PSMO) Public Sector Management Office Western Australia, 1997, Preparing Performance Indicators: A Practical Guide. A Public Sector Management Office publication, Perth, Western Australia. Available at http://www.audit.wa.gov.au/reports/performanceindicators.html

Rajabifard, A., Chan, T.O. and Williamson, I.P., 1999, The Nature of Regional Spatial Data Infrastructures. Proceedings of AURISA 99, Blue Mountains, NSW, Australia.

Rajabifard, Abbas, 2002, Diffusion of Regional Data Infrastructures: with Particular Reference to Asia Pacific. Ph.D. thesis, Department of Geomatics, The University of Melbourne, Melbourne, Victoria, Australia.

Reh, John, 2005, 'Key Performance Indicators.' About Magazine.

Stewart, Craig, 2006, Results-based Management Accountability Framework. Paper presented at the GEOIDE/GeoConnections Workshop on Valuing/Evaluating Spatial Data Infrastructures, Ottawa, Ontario, Canada.

(TRADE) Training Resources and Data Exchange, 1995, How to Measure Performance: A Handbook of Techniques and Tools. A US Department of Energy Defense Program report (Oak Ridge Associated Universities, Oak Ridge, TN, USA).

Van Loenen, Bastiaan, 2006, Developing Geographic Information Infrastructure: The Role of Information Policies. Ph.D. thesis, Delft University of Technology, Delft, The Netherlands.

Wikipedia, 2006, 'Key Performance Indicators.' August 2006. http://en.wikipedia.org/wiki/Key_performance_indicators

Williamson, Ian, 2002, Land Administration and Spatial Data Infrastructures: Trends and Developments. In the Proceedings of the Implementation Models of Asia and the Pacific Spatial Data Infrastructure (APSDI) Clearinghouse Seminar, Negara Brunei Darussalam.

(WHO) World Health Organization, 2000, Tools for Assessing the O&M Status of Water Supply and Sanitation in Developing Countries (World Health Organization, Geneva, Switzerland).
