ATM KEY PERFORMANCE INDICATORS: CASE STUDY

Chafik OKAR, Razane CHROQUI
Ata GHALEM
LAMSAD, Higher School of Technology, Univ Hassan 1st, Berrechid, Morocco.
IMII, Faculty of Sciences and Techniques, Univ Hassan 1st, Settat, Morocco.
[email protected]
Abstract—This paper presents a state of the art of the performance indicators of the Air Traffic Management (ATM) system. It reviews the key performance indicators (KPIs) proposed by different aviation organizations in order to provide an objective comparison. Since Morocco is a member of the three organizations, this paper also discusses its KPIs, which are part of the ongoing performance measurement process at the Air Navigation Pole of Casablanca.

Keywords—performance; indicator; ATM; measurement
I. INTRODUCTION
Performance is a concept known worldwide that attracts a lot of attention. Yet it is a complex concept that is difficult to define, for it is multidimensional and polysemous. Performance is related to many concepts such as management, assessment, evaluation, reviewing, monitoring and measurement. This paper presents, in the first section, a quick view of the definition of performance, its measurement and the notion of indicators. The second section presents the state of the art of the ATM performance indicators of two major organizations: the International Civil Aviation Organization (ICAO) and the European Union (EU). It then discusses the state of the ATM performance indicators of the Kingdom of Morocco.
II. PERFORMANCE MEASUREMENT
A. The concept of performance

'Performance' is a concept that accepts multiple definitions. It seems that each scholar has his own conception of the term and no standard definition is available. The conceptualization of performance depends not only on the fields of research and the concepts related to the term, but also on the needs and viewpoints of the scholars. From the many definitions proposed by scholars, Bourguignon (1997), for example, identifies three main senses in the field of management accounting. We quote: "1) performance is success: performance does not exist by itself; it is a function of representations of success, variable according to enterprises and according to actors; 2) performance is the result of action: as opposed to the previous sense, this one does not carry a value judgment, and performance measurement is 'understood as the ex post evaluation of results' (Bouquin, 1986); 3) performance is action: in this sense performance is a process and 'not a result that appears at a moment in time' (Baird, 1986)" (Bourguignon, 1997).

Otley (2001) defines the term differently. He considers performance in the context of a business or the public sector and argues that "it is the public sector that gives a useful start in its use of the three 'E's' of performance, namely:
- Effectiveness [delivering desired outputs, and even outcomes];
- Efficiency [using as few inputs as possible to obtain these outputs];
- Economy [buying inputs as cheaply as possible]" (Otley, 2001).
It seems that scholars will never settle on one standard definition of performance. Still, we believe that performance has two dimensions: dynamic and static. The dynamic dimension is related to the changeable nature of performance depending on the context, field and related concepts. This dynamism is expressed through the multiple and variable characteristics describing performance, such as: competence, stakeholder satisfaction, competitiveness, creation of value, profitability/budget-ability, innovation, quality, success, strategy, progress, productivity, economy, etc. The static dimension, which contains the basic elements, is defined when an organization attains the objectives set (effectiveness), optimizes the resources used to achieve the desired results (efficiency), uses the adequate resources to attain the objectives set (relevance) and, evidently, ensures the consistency of all actions taken (coherence).

B. Performance measurement and performance indicators

Performance measurement is also known as a broad topic that is not easy to define. According to Neely et al. (1995), performance measurement can be defined as "the process of quantifying the efficiency and effectiveness of action", and a performance measure can be defined as "a metric used to quantify the efficiency and/or effectiveness of an action".
Bourne et al. (2003) state that "performance measurement (as promoted in the literature and practiced in leading companies) refers to the use of a multi-dimensional set of performance measures". The basic reality to remember about performance measurement is that it is an act of quantification. The broadness of performance measurement is also reflected in its use as a system. In fact, according to Neely et al. (1995), a performance measurement system can be defined as the set of metrics used to quantify both the efficiency and effectiveness of actions.
Moreover, Wettstein and Kueng (2002) classify the Performance Measurement System (PMS) as an information system, and they define it as "a system that tracks the performance of an organization (or part thereof), supports internal and external communication of results, helps managers by supporting both tactical and strategic decision-making, and facilitates organizational learning". Many frameworks have been developed to study the measurement of performance. Neely et al. (2007), for instance, state that multiple performance measurement frameworks exist, such as the Balanced Scorecard, the business excellence model and activity-based costing. However, they propose an alternative framework, named the 'performance prism', which unifies the existing measurement frameworks and builds upon their individual strengths. According to Neely et al. (2007), "the performance prism makes an important distinction between stakeholder satisfaction – what the stakeholders want of the organization – and stakeholder contribution – what the stakeholders contribute to the organization". Likewise, according to Kanji (2002), the Balanced Scorecard (BSC), a performance measurement framework originally presented by Kaplan and Norton (1992), is "a comprehensive performance summary that combines financial and non-financial measures", which can be grouped into four main perspectives: a financial perspective, a customer perspective, an internal business perspective, and an innovation and learning perspective. No matter which framework is used, it will be based on measures, which are expressed through indicators. Thus, in order to measure performance, quantitative or qualitative indicators are needed. Researchers define performance indicators in many ways. According to Berrah (2013), "performance indicators are piloting tools whose purpose is the ratio, in the broad sense, of the states reached to the expected states". Fortuin (1988), meanwhile, defines them as "a variable indicating the effectiveness and/or efficiency of a part or whole of the process or system against a given norm/target or plan". In practice, an indicator translates the relation between the results, the resources and the objectives set; it is often seen as the ratio of an output to an input. In the next section, we will explore performance measurement in the air navigation domain and list and compare the different performance indicators used in this field, which are directly described as a means of quantification.
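To fix ideas, the relations evoked above can be written schematically. This is our own illustrative reading of the definitions cited; the notation is not taken from Berrah, Fortuin or Otley:

```latex
% Schematic reading of the definitions above (illustrative only; notation is ours).
\[
\text{effectiveness} = \frac{\text{results achieved}}{\text{objectives set}},
\qquad
\text{efficiency} = \frac{\text{outputs obtained}}{\text{inputs consumed}},
\]
\[
\text{indicator} = \frac{\text{state reached}}{\text{expected state}}
\quad \text{(in the sense of Berrah, 2013).}
\]
```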
III. ATM PERFORMANCE
ATM performance has undergone a great evolution in recent years, together with the evolution of Air Navigation Systems (ANS). In fact, the performance framework of ICAO (International Civil Aviation Organization) has grown into a performance-based Global Air Navigation System with the ASBU methodology. Before 1992, the system was known as the ground-based air navigation system. Then, in order to respond to the growth of civil aviation, it evolved into the Future Air Navigation Systems (FANS). Later, in 1994, ICAO transitioned to the communications, navigation and surveillance/air traffic management (CNS/ATM) systems. In 2006, the system became mature enough to be considered a Global ATM system and then, in 2008, it was ready to become the performance-based Global Air Navigation System. Since the civil aviation sector is in continuous development, the system was enhanced in 2012 by the addition of the Aviation System Block Upgrade (ASBU) methodology. To be exact, it was in September 2003 that the Eleventh Air Navigation Conference urged ICAO to develop a performance framework for air navigation systems. Consequently, ICAO developed the Performance-Based Approach (PBA) framework, which is based on three principles, as explained in the Manual on Global Performance of the Air Navigation System (Doc 9883/2009):
- Strong focus on desired/required results through the adoption of performance objectives and targets: the focus is on specifying desired/required performance instead of solutions. Management attention is shifted from a resource- and solution-centric view (how will we do it?) towards a primary focus on desired/required performance results (what is the outcome we are expected to achieve?).
- Informed decision-making, driven by the desired/required results: this means working "backwards" from the "what" (the primary focus) to decisions about the "how".
- Reliance on facts and data for decision-making: this rests on the principle that "if you can't measure it, you can't manage it", i.e. unless you measure something, you do not know whether it is getting better or worse.

The PBA is a way of organizing the performance management process. This process is explained step by step in Doc 9883/2009. The first step of the process defines the scope, the context and the general ambitions/expectations. The expectations of the ATM community are specified in Doc 9854/2005 and are categorized into 11 Key Performance Areas (KPAs): safety, security, environmental impact, cost effectiveness, capacity, flight efficiency, flexibility, predictability, access and equity, participation and collaboration, and interoperability. The second step sets the objectives. The third step describes the quantification of these objectives, which should be SMART (Specific, Measurable, Achievable, Relevant and Time-bound). This quantification is translated into the measurement of the performance objectives, which triggers the need to define Performance Indicators or Key Performance Indicators (KPIs) and the metrics underpinning those indicators.
This section states the different KPIs defined by ICAO in the different KPAs, then compares them to the ones defined by the European Union (EU) and the EUR Region. Then, as a case study, we review the KPIs currently measured in the Moroccan Air Navigation System and discuss the future plans related to the Moroccan performance framework.

A. ATM performance indicators

In this subsection, we emphasize how ATM performance indicators are mainly studied at different levels: the global level by ICAO, the regional level (example of the EUR Region) and the European level by the EU. Note that other organizations have also studied ATM performance, such as the FAA (Federal Aviation Administration) and CANSO (Civil Air Navigation Services Organization), but they are not considered in the comparison of this paper.

1) Global level: ICAO

According to Doc PBTG/2007, performance indicators are defined "in order to quantify the degree to which performance objectives are being, and should be, met. When describing performance indicators, one must define what and how measurements will be obtained (through supporting metrics) and combined to produce the indicator". Also, in Doc 9883/2009 Part I, a performance indicator is defined as follows: "the current/past performance, expected future performance (estimated as part of forecasting and performance modelling), as well as actual progress in achieving performance objectives is quantitatively expressed by means of indicators (sometimes called key performance indicators, or KPIs)". It is also stated in Doc 9883/2009 that indicators are often not directly measured. They are calculated from supporting metrics according to clearly defined formulas, e.g. cost-per-flight indicator = Sum(cost)/Sum(flights). Thus, supporting metrics are defined to calculate the values of performance indicators, while the values of performance indicators that need to be reached or exceeded to consider a performance objective as being fully achieved are defined as performance targets. The indicators proposed by ICAO, presented in Table 1 below, are based on a comparison of ATM performance indicators from two organizations. According to ICAO, some of the indicators yielded by the investigation were not identical; however, others were commonly used in both compared organizations. Where no common indicators were found in the two organizations, a few examples are presented from a third source. In fact, states are invited to measure their performance, but no mandatory KPIs are required, for the proposed KPIs are rather guidelines for measuring performance. Instead, states are encouraged to follow the KPI examples and to define their own appropriate KPIs within the areas defined by ICAO (the 11 KPAs). The final aim is to be able to develop a performance framework, in order to align with the performance-driven system of ICAO.

Table 1: ICAO KPAs and KPIs (see Appendix A)
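As an illustration of the relation between supporting metrics, indicators and targets described above, the short sketch below computes the cost-per-flight indicator quoted from Doc 9883/2009 and checks it against a target. The data values, field names and helper functions are hypothetical examples of ours, not material from the ICAO documents:

```python
# Illustrative sketch (not from Doc 9883/2009): computing the cost-per-flight
# indicator quoted above, KPI = Sum(cost) / Sum(flights), from supporting
# metrics, then checking it against a performance target.
# All figures and names below are hypothetical.

from dataclasses import dataclass
from typing import List


@dataclass
class MonthlyMetrics:
    """Supporting metrics collected for one month (hypothetical values)."""
    atm_cost: float    # total cost of ATM service provision for the month
    ifr_flights: int   # number of IFR flights handled during the month


def cost_per_flight(metrics: List[MonthlyMetrics]) -> float:
    """Indicator = Sum(cost) / Sum(flights), aggregated over the reporting period."""
    total_cost = sum(m.atm_cost for m in metrics)
    total_flights = sum(m.ifr_flights for m in metrics)
    return total_cost / total_flights


def target_achieved(kpi_value: float, target: float) -> bool:
    """For a cost-type KPI, the objective is considered met when the value does not exceed the target."""
    return kpi_value <= target


if __name__ == "__main__":
    year = [
        MonthlyMetrics(atm_cost=1_200_000.0, ifr_flights=4_800),
        MonthlyMetrics(atm_cost=1_150_000.0, ifr_flights=4_650),
        MonthlyMetrics(atm_cost=1_300_000.0, ifr_flights=5_100),
    ]
    kpi = cost_per_flight(year)
    print(f"Cost per flight: {kpi:.2f}")
    print(f"Target (<= 260.0) achieved: {target_achieved(kpi, target=260.0)}")
```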
2) Regional level: ICAO EUR Region

ICAO invited the states and the regions to adopt the PBA methodology and to define their set of KPIs, in order to have a safer and more efficient system. This envisaged system would be attained through identified cost savings, reduction in waste of resources, more equitable charging practices, and more efficient provision of services. Thus, the EUR Region, which comprises 52 states, combined both the guidelines of ICAO and the regulations of the EU and developed the EUR Region Performance Framework Document (EUR Doc 030). This framework applies to a much larger geographical scope than the SES performance scheme (the EU framework, discussed in subsection 3 below), which applies to 29 states. The EUR Region decided to start with an initial framework composed of a subset of indicators, in order to facilitate the engagement of the states in the process of performance management. The objective of the EUR Region is to define suitable KPAs, KPIs and a set of realistically measurable metrics. Therefore, the EUR Region chose only 5 KPAs out of the 11 ICAO KPAs, namely: Safety, Capacity, Efficiency and Environment, Cost-effectiveness, and Participation by the ATM community.

Table 2: EUR Region KPAs and KPIs (see Appendix B)

3) European level: EU

The EU approaches performance management in a different way. Instead of a decision-making methodology (PBA), the EU proposes a performance scheme. In fact, the performance scheme is intended to "contribute to sustainable development of the air transport system by improving the overall efficiency of air navigation services" (Regulation 390/2013). The EU decided that it could realize this objective through four KPAs instead of 11: Safety, Environment, Capacity and Cost-efficiency. Moreover, Eurocontrol and the European Commission have prepared a European Air Traffic Management Master Plan for gradually implementing the SES. In this plan, a list of Key Performance Indicators has been defined. In fact, Regulation 390/2013 identifies two types of indicators: performance indicators (PIs) and key performance indicators (KPIs). 'Performance indicators' means the indicators used for the purpose of performance monitoring, benchmarking and reviewing, while 'key performance indicators' means the performance indicators used for the purpose of performance target setting. According to both definitions, performance indicators therefore serve two purposes: first, monitoring/benchmarking/reviewing and, second, target setting. For the purpose of this paper, only the KPIs, which serve the purpose of target setting, are considered and compared with those of ICAO and the EUR Region. In fact, the EU defines the KPIs for target setting at two different levels: the Union level (Union-wide targets) and the local level (functional airspace block, national, charging zone and airport targets). Again, for comparison purposes, only the KPIs at the Union level are considered in this paper.

Table 3: EU KPAs and KPIs (see Appendix C)
4) KPIs comparison

It is obvious that the organizations use different indicators. Moreover, it is important to note that ICAO, as an international organization, always provides states with general and wide-ranging practices that are either mandatory or desirable. At the regional or European level, however, the scope is narrower and the initiative of the performance-driven system is still in its early stages. The first steps taken by the EU and the EUR Region are justified and reasonable, and the subset of KPAs chosen is a modest yet smart start, especially since the areas of safety, capacity, environment and cost-effectiveness are the most urgent. We present in Table 4 below a summary of the differences between the KPIs related to the four KPAs chosen by the EU at the Single European Sky (SES) level and by the EUR Region, plus the KPA Participation by the ATM community.

Table 4: Summary of common KPAs and related KPIs (see Appendix D)

In the first KPA, Safety, we observed that the EU and the EUR Region use similar KPIs, both related to the effectiveness of safety management and to the severity classification (the EUR Region adds the concept of just culture), which differ from the ICAO KPI that measures the normalized number of accidents. In the KPA Capacity, ICAO proposes measures related to flights at three levels and, instead of en-route, it uses system-wide and airspace levels. The EU and the EUR Region both measure ATFM delays, except that the EUR Region measures both en-route and terminal delays, while the EU focuses only on en-route delays; in fact, the terminal and airport measures of the EU are considered PIs for monitoring purposes. When it comes to the KPA Environment, ICAO proposes measures related to inefficiencies, noise and fuel efficiency. The EUR Region differentiates between efficiency and environment, and both flight efficiency and inefficiency are taken into consideration. In the EU KPIs, however, only en-route measures are calculated; the terminal and airport measures are not considered even in the PIs. Instead, the PIs are related to the FUA (Flexible Use of Airspace) and CDRs (Conditional Routes). Furthermore, all three organizations use different measures and no similarities are observed in the KPA Cost-effectiveness. In fact, ICAO focuses on the average ATM costs, while the EUR Region focuses on the CNS/ATM costs and adopts one of the EUROCONTROL KPIs, namely ATCO productivity; however, it does not calculate the KPIs of employment cost and support costs. On the other hand, the EU focuses on all the ANS costs and not only the ATM or CNS/ATM costs. Moreover, the EU proposes the determined unit cost (DUC) for en-route ANS as a KPI. Finally, the KPA Participation by the ATM community is not measured by the EU since, as explained before, the EU chose only the four KPAs above. The EUR Region clearly adheres to the measures proposed by ICAO; they both focus on the participation of states and international organizations in planning and implementation meetings.

The similarities and differences in the choice of indicators trigger many questions related to the choice of suitable indicators. The first question to ask is: what indicators should a state choose if it is at the same time a member of ICAO, the EU and CANSO? For example, a state of the EUR Region has the possibility to choose the suitable KPIs according to its needs, for it follows both the ICAO and EU rules. But how can it make the right choice? According to the 12th Conference, the EUR Region "decided to base its indicator proposals as much as possible on on-going processes and activities, therefore giving due consideration to the SES performance scheme". Their choice is based on the requirement that it should facilitate the implementation of the performance framework, especially for the non-SES states, so they started with measures that many states can easily use. However, we believe that the choice should be based on characteristics that serve performance purposes. A state should choose the indicators that will provide, in the first place, a better view of the management of its performance; in other words, indicators that will help attain better performance.
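Before turning to the Moroccan case, and purely for concreteness, the descriptions of three of the KPIs compared above can be written as simplified formulas. The notation is ours, the normalization choices are assumptions, and the exact computation rules in EUR Doc 030 and Regulation 390/2013 (e.g. which flights, flight portions and exemptions are counted) are more detailed:

```latex
% Simplified, illustrative formulas for three of the KPIs discussed above.
% Notation and normalization choices are ours, not the regulatory definitions.
\[
\text{Average en-route ATFM delay per flight}
  = \frac{\sum_{f \in F} d_f}{|F|},
\qquad d_f:\ \text{en-route ATFM delay minutes of flight } f .
\]
\[
\text{Horizontal en-route flight inefficiency}
  = \frac{\sum_{f \in F} \left( L_f - G_f \right)}{\sum_{f \in F} G_f},
\qquad
\begin{aligned}
L_f &:\ \text{length of the en-route part of the actual (or last filed) trajectory,}\\
G_f &:\ \text{corresponding great-circle distance.}
\end{aligned}
\]
\[
\text{Determined unit cost (DUC)}
  = \frac{\text{determined en-route ANS costs}}{\text{forecast en-route service units}} .
\]
```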
B. Moroccan ATM performance indicators

In the case of the Kingdom of Morocco, performance measurement has only recently been introduced to the Moroccan Air Navigation Services. According to the Air Navigation Pole (PNA) of Casablanca, which is the body responsible for delivering Air Navigation Services (ANS), performance measurement is currently an on-going process that has not yet reached maturity. In other words, the PNA adopted the same approach as the EUR Region and started with a subset of indicators. Today, the PNA is in the middle of the process of measuring performance in the four KPAs proposed by the EU, namely Safety, Capacity, Environment and Cost-effectiveness. Note that the PNA is slightly more advanced in the measurement of its safety KPIs than in its capacity and environment KPIs, while it is still at the very first stage of the measurement of its cost-effectiveness KPIs. In fact, the Kingdom of Morocco signed an agreement with EUROCONTROL (European Organization for the Safety of Air Navigation) in April 2016 to improve the quality of service and the safety management systems within the PNA. According to the PNA, its strategy was strengthened in 2016 by the adoption of the PBA (since Morocco is a member of ICAO) and by the agreement with EUROCONTROL, in order to shift to a management based on the performance and excellence of the ATM system. The fact that Morocco has to choose among all the indicators proposed by ICAO and EUROCONTROL makes it largely a question of resources. Since the agreement with EUROCONTROL is signed, Morocco can take advantage of all the services offered by the agency and the expertise of EUROCONTROL's experts. This alone will encourage the PNA to directly adopt the EUROCONTROL indicators. But are they the appropriate measures that will help improve the global ATM performance? This paper is only a state of the art, so this question will be answered in our future study of the ATM performance indicators. However, in our viewpoint, the Moroccan ATM system needs an in-depth analysis of its performance indicators and of its needs in terms of performance management.
References

[1] L. Berrah, "La quantification de la performance dans les entreprises manufacturières : de la déclaration des objectifs à la définition des systèmes d'indicateurs", Habilitation à diriger des recherches, 2013.
[2] A. Bourguignon, "Sous les pavés la plage... ou les multiples fonctions du vocabulaire comptable : l'exemple de la performance", Comptabilité-Contrôle-Audit, tome 3, vol. 1, mars, pp. 89-101, 1997.
[3] M. Bourne and A. Neely, "Implementing performance measurement systems: a literature review", Int. J. Business Performance Management, vol. 5, no. 1, 2003.
[4] Doc 030/2014: EUR Region Performance Framework Document.
[5] Doc 9854/2005: Global Air Traffic Management Operational Concept.
[6] Doc 9883/2009: Manual on Global Performance of the Air Navigation System.
[7] Doc PBTG/2007: Performance Based Transition Guidelines, Version 0.51.
[8] L. Fortuin, "Performance indicators, why, where and how?", European Journal of Operational Research, vol. 34, pp. 1-9, 1988.
[9] G. K. Kanji, "Performance measurement system", Total Quality Management, vol. 13, no. 5, pp. 715-728, 2002.
[10] D. Otley, "Extending the boundaries of management accounting research: developing systems for performance management", British Accounting Review, vol. 33, pp. 243-261, 2001.
[11] A. Neely, M. Gregory and K. Platts, "Performance measurement system design: a literature review and research agenda", International Journal of Operations & Production Management, vol. 15, no. 4, pp. 80-116, 1995.
[12] Regulation (EU) No 390/2013, laying down a performance scheme for air navigation services and network functions.
[13] T. Wettstein and P. Kueng, "A maturity model for performance measurement systems", WIT Press, Southampton, UK, 2002.
Note: The appendices are given on the following pages.
APPENDIX A

Table 1: ICAO KPAs and KPIs

KPA 01: Access and Equity
- Unsatisfied demand versus overall demand (measured in volume of airspace time).

KPA 02: Capacity
Direct capacity measures fall into three types: system-wide, airspace and airport capacity.
- System-wide: the number of flights, flight hours and flight kilometres that can be accommodated; and the number of flights, flight hours and flight distance that can be accommodated, measuring the actual number that were produced (this approach looks at the number of flights, available plane miles, etc.).
- Airspace: number of IFR flights able to enter an airspace volume; and the agreed-upon airspace capacity rates, which are the number of IFR flights able to be present in sectors at any one time.
- Airport: the number of movements per unit of time that can be accepted during different meteorological conditions.

KPA 03: Cost effectiveness
It focuses on the cost of ATM normalized per flight.
- Average cost per flight at a system-wide annual level;
- Total operating cost plus cost of capital divided by IFR flights; and
- Total labour obligations to deliver one forecast IFR flight in the system, measured monthly and year-to-date.

KPA 04: Efficiency
It should comprise both focus areas "Temporal Efficiency" (i.e. delay) and "Flight Efficiency" (trajectory oriented).
- Per cent of flights departing on-time;
- Average departure delay of delayed flights;
- Per cent of flights with normal flight duration; and
- Average flight duration extension of flights with an extended flight duration.
Or:
- Per cent of flights with on-time arrival at a predetermined set of airports; and
- Total number of minutes to actual gate arrival time exceeding planned arrival time on a per-flight basis at the predetermined set of airports.

KPA 05: Environment
- Amount of emissions (CO2, NOx, H2O and particulate) which are attributable to inefficiencies in ATM service provision;
- Number of people exposed to significant noise as measured by a three-year moving average; and
- Fuel efficiency per revenue plane-mile as measured by a three-year moving average.

KPA 06: Flexibility
- Number of rejected changes to the number of proposed changes (during any and all phases of flight) to the number of flight plans initially filed each year.
- Proportion of rejected changes for which an alternative was offered and taken.

KPA 07: Global Interoperability
It focuses on the level of compliance with international Standards.
- The number of filed differences with ICAO Standards and Recommended Practices.
- A second organization describes the following indicator: level of compliance of ATM operations with ICAO CNS/ATM plans and global interoperability requirements.

KPA 08: Participation by the ATM community
- Number of yearly meetings covering planning, implementation and operation, and covering a significant estimated proportion (e.g. 90 per cent) of the whole of the regional aviation activity;
- Number of yearly meetings for planning;
- Number of yearly meetings for implementation; and
- Number of yearly meetings for operations.

KPA 09: Predictability
- Some delay measures included in the efficiency KPA are considered to be measures of predictability as well. For predictability indicators expressed through delay, the issues are the same as for those in the efficiency KPA.

KPA 10: Safety
- Number of accidents normalized through either the number of operations or the total flight hours.

KPA 11: Security
- Number of acts of unlawful interference reported against air traffic service provider fixed infrastructure;
- Number of incidents involving direct unlawful interference to aircraft (bomb threat, hijack, or imitative deception) that required air traffic service provider response; and
- Number of incidents due to unintentional factors, such as human error, natural disasters, etc., that have led to an unacceptable reduction in air navigation system capacity.
APPENDIX B

Table 2: EUR Region KPAs and KPIs

KPA 01: Safety
- Effectiveness of Safety Management.
- Level of State Safety/Just Culture (safety/just culture survey).
- Application of a common methodology for classification of occurrences in terms of risk severity (harmonized occurrence severity classification methodology).

KPA 02: Capacity
- En-route ATFM delays: average ATFM delay per flight generated by the airspace volume (en-route).
- Airport ATFM delays: average ATFM delay per flight in the main airports.

KPA 03: Efficiency and environment
- Efficiency: average horizontal en-route flight efficiency, defined as the difference between the length of the en-route part of the actual trajectory (where available) or last flight-planned route and the great circle.
- Environment: CO2 emissions deriving from inefficiencies in flight efficiency (conversion of additional distance into CO2 emissions based on a standard-values formula).

KPA 04: Cost-effectiveness
- IFR flights (en-route) per ATCO hour on duty.
- IFR flight hours per ATCO hour on duty.
- IFR movements (airport) per ATCO hour on duty.

KPA 05: Participation by the ATM community
- Level of participation of States and international organizations in planning and implementation meetings.
- Level of responses to State Letters asking for information on planning and implementation aspects.
- Level of provision of performance results from States for the Regional Performance Review Report (RPRR).
APPENDIX C

Table 3: EU KPAs and KPIs (Union-wide level)

KPA 01: Safety
- The minimum level of the effectiveness of safety management.
- The percentage of application of the severity classification based on the Risk Analysis Tool (RAT) methodology to the reporting of, as a minimum, three categories of occurrences: separation minima infringements, runway incursions and ATM-specific occurrences at all air traffic services units.

KPA 02: Environment
- The average horizontal en-route flight efficiency of the actual trajectory.
- The average horizontal en-route flight efficiency of the last filed flight plan trajectory.

KPA 03: Capacity
- The average minutes of en-route ATFM (Air Traffic Flow Management) delay per flight attributable to air navigation services.

KPA 04: Cost-efficiency
- The average Union-wide determined unit cost (DUC) for en-route air navigation services.
- The average Union-wide determined unit cost (DUC) for terminal air navigation services.
APPENDIX D

Table 4: Summary of common KPAs and related KPIs

KPA: Safety
- ICAO: Normalized number of accidents.
- EUR Region: Effectiveness of Safety Management; level of State Safety/Just Culture (safety/just culture survey); application of a common methodology for classification of occurrences in terms of risk severity.
- EU: The minimum level of the effectiveness of safety management; the percentage of application of the severity classification based on the Risk Analysis Tool (RAT) methodology.

KPA: Capacity
- ICAO: The number of flights, flight hours and flight kilometres that can be accommodated (system-wide, airspace and airport).
- EUR Region: ATFM delays per flight (en-route and airports).
- EU: En-route ATFM delays per flight.

KPA: Environment
- ICAO: Amount of emissions which are attributable to inefficiencies in ATM service provision; number of people exposed to significant noise; and fuel efficiency.
- EUR Region: Efficiency: average horizontal en-route flight efficiency. Environment: CO2 emissions deriving from inefficiencies in flight efficiency.
- EU: The average horizontal en-route flight efficiency of the actual trajectory, or of the last filed flight plan trajectory.

KPA: Cost-effectiveness
- ICAO: Normalized ATM costs; average ATM costs; total operating cost plus cost of capital.
- EUR Region: CNS/ATM costs; ATCO productivity.
- EU: ANS costs; the determined unit cost (DUC) for en-route ANS.

KPA: Participation by the ATM community
- ICAO: Number of yearly meetings covering planning, implementation and operation, and covering a significant estimated proportion of the whole of the regional aviation activity.
- EUR Region: Level of participation of States and international organizations in planning and implementation meetings; level of responses to State Letters asking for information on planning and implementation aspects; level of provision of performance results from States for the Regional Performance Review Report (RPRR).
- EU: Not measured.