Quality and Users

Chapter of the final report of the LEG on Quality

Michel Blanc (INSEE), Walter Radermacher and Thomas Körner (Statistisches Bundesamt)

Summary

One of the key principles of quality management in official statistics is user orientation. The types of users are manifold and the relationship between users and producers is very complex. This difficult situation requires much attention and will be one of the main fields of interest in the coming years. One important reason for the great variety of user types lies in the fact that statistical information (as the main product of NSIs) has to be provided both as a public good (informational infrastructure for democratic societies) and as a private good (tailor-made analyses on demand from individual customers). Different types of users with different and partly conflicting requirements correspond to this distinction. In this respect the statistical product differs from many other products on the market. In addition to the diverse and partly conflicting user needs, the relationship of the producer with each single user is complex in itself. A complex interaction between user and producer has to be established in order to achieve an optimum solution. In this user-producer dialogue, user and producer negotiate and define the statistical working system, comprising the statistical programme as well as the products and the processes.

Introduction

When it comes to quality, there are few other statements we could as instantly agree upon as the demand that official statistics has to be "user oriented": "It is for users to decide. They are the people who determine quality criteria" heralds a recent publication on quality in official statistics. Statisticians, it is argued, are "no longer 'number freaks' in a world of their own" but become "managers of statistics" who stay "in constant touch with those who make decisions" (Franchet 1999a: 3). Staying in touch with the users could, however, be a more difficult assignment than it seems at first sight. Improving user orientation is a complex issue, and this is especially true for official statistics. In few other areas are the categories of users as manifold. Seldom is the interrelationship between users and producers as complex as it is in official statistics. In this contribution, we would like to make this complexity a bit clearer and to show some ways of dealing with user orientation. Accordingly, the chapter starts with a general outline of the types of users of official statistics and a characterisation of the interrelationship between users and producers (the user-producer dialogue). In a second step, we will introduce some fundamental concepts on what quality management and quality improvement mean in this context. Quality, we argue, can only be attained in an optimisation process in which an optimum mixture of quality items has to be found. Thirdly, we will name some examples of instruments of optimisation. As regards statistical councils - important instruments in this respect - the current situation in the European National Statistical Institutes (NSIs) is briefly outlined. The chapter has been written as part of the work of the LEG on Quality. Recommendations on the chapter can be found in the summary LEG report.

1 Characterisation of statistics as an economic production process

If we want to find ways that help us to improve quality (as perceived by the users), first of all some basic considerations have to be made. The general idea of improvement is a good vision. Yet, in order to find concrete measures to improve quality (again, as expected by the user), we require some specifications. In this section, we will try to outline those specifications which concern the nature of the products NSIs provide and the types of customers provided with these goods. Finally, the basic components of the interaction between the users and the producers of statistical information will be shown.

1.1 The nature of the product "statistical information"

The main product NSIs provide can roughly and generally be defined as "statistical information". A considerable proportion of the complexity and problems of the relationship between users and producers stems from this initial definition. For "statistical information" can be provided both as a public good and as a private good. In all European NSIs both types are present (yet in different proportions):
• As a public good, official statistics provides an informational infrastructure for democratic societies and their decision processes. According to the economics of public goods, goods are referred to as public (or "collective") if they (1) cannot, practically, be withheld from one individual consumer without withholding them from others ("non-excludability"), and if (2) the marginal cost of an additional person consuming them, once they have been produced, is zero ("non-rivalrous consumption"). For these reasons it is very unlikely that public goods can be profitably provided on a free market. This is true of the "infrastructure function" of official statistics as well.
• As a private good, statistics can be traded on the free information market like any other private good. Besides the informational infrastructure made available to all citizens, NSIs at the same time produce tailor-made analyses for individual customers; to these cases the principles of the economics of public goods do not apply. By producing analyses on demand from individual customers, NSIs are doing something quite similar to what information providers in the private sector do.
Statistical information is provided both as a private and as a public good in all European NSIs.1 Yet, the mix of these alternatives varies according to cultural differences within the European NSIs. Sketching a very rough picture, one could state that the Anglo-Saxon and Nordic countries provide statistical information as a "private good" in a larger proportion than the southern and central European member states do. Recent development trends make it seem likely that, in general, the proportion of statistical information provided as a "private good" will rather increase.

1 Examples show that both types can very well be integrated in one single comprehensive marketing model. Cf., just to name one, the German case: Knoche/von Oppeln-Bronikowski/Kühn 1999.

In this way, e.g., know-how and potentials of the NSIs can be used more comprehensively than in the past, and budget cuts can be partly balanced.

1.2 Types of users of statistics

Taking the distinction between public goods and private goods as a starting point, the main challenge for official statistics in meeting its users' needs is to satisfy the public demand and the private demand at the same time. This is not a simple assignment, as the public demand and the private demand differ in many respects (see Figure 1):
• Users of the public good "statistics" are (1) the "society" (or state) as a whole and its representatives in (democratic) decision processes, including stakeholders of different interests, and (2) all the citizens (or, to use the French word, "citoyens") living within the society, with their diverse expectations towards official statistics (in many areas probably as vague to the users themselves as they are unknown to the statistician). It can easily be seen that the types of users of the public good are already manifold, ranging from a government agency to an academic institute co-operating with an NSI in research to a union and all the way to a single citizen searching for specific statistical information.2 As diverse as these users of the public good "statistics" seem to be, they nevertheless have some features in common. Theirs is a long-term demand (materialising in statistics laws and regulations), which is expressed not (only) directly by the users but (also) via the political representatives. The infrastructure, the products and the programme provided stem from a complex socio-political dialogue in which the terms are not (and probably cannot fully be) explicitly fixed.
• By contrast, users of the private good are individual customers as we know them from the private sector. They are free to bargain the conditions of data use, fixing the result of the bargaining in short-term contracts. These contracts contain specific agreements on what will be provided, in what time and at what price. The demand and expectations of the individual users can be determined in a direct dialogue.

Figure 1: Public demand versus private demand in official statistics
• Public/social demand: the "citizen" and "society"; a long-term demand expressed by the political representatives; socio-political dialogue, with government setting priorities; terms are not (explicitly) specified.
• Individual demand: the "customer"; short-term contracts, free bargaining (market); contract between NSI and individual partner; terms are explicitly specified.

The distinction shows that the way to user orientation in official statistics is far from an easy one. Beyond the catchword "customer orientation", processes are extremely complex, and the instruments for meeting public demand and private demand cannot always be the same. More generally, in order to be able to improve user satisfaction, it must be clear, first of all, whether we are talking about a public or an individual demand. It is only after this initial step that the instruments for improving user satisfaction (described below) can be used properly.

2 An overview of some types of users, public and private, is given by Franchet 1999b: pp. 91-93. Unfortunately, precise knowledge of the different uses of statistical information is still lacking. This would be an important subject for further research in the social and political sciences. Some recent changes in the uses of statistics can nevertheless be noted: On the one hand, there is an increasing demand for short-term statistics (e.g., from the ECB) and for very fast delivery of these statistics (which is not without consequence for "quality mixes", see 2.1). On the other hand, the demand for regional and local statistical information has grown in the context of the regional policy of the EU.


1.3 The complex relationship of users and producers in statistics

Apart from the diversity of user types, the relationship between users and producers in statistics is in itself a rather complex one. Statistical products cannot be as easily defined and "used" as may be the case with certain types of physical goods. In statistics, more complex arrangements must be made. The four cornerstones in the relationship of users and producers in statistics are summarised in Figure 2.

[Figure 2 (schematic): user requirements lead user and producer to negotiate about and define the programme, products and processes (the working system); implementation of processes and provision of products and services yields statistical figures; interpretation of the products turns the figures into statistical information, which the user applies.]

Figure 2: The user-producer dialogue

Taking the user requirements to be met as the pivot, a complex interaction between user and producer is necessary in order to find an optimum solution. Normally, this optimum solution cannot be attained in every respect. A dialogue has to take place in which user and producer agree upon a solution suitable for both the users' needs and the producers' capacities. As depicted in Figure 2, we distinguish four steps.

Step 1: User and producer have to carry out negotiations concerning the working system. The statistical working system consists of the main characteristics of the statistical products (e.g. quality characteristics like timeliness, accuracy, comparability and others), the broad framework of processes, organisational structure and methods, as well as the statistical programme (and its "relevance") as a whole. In the negotiations, users and producers agree upon a solution meeting, on the one hand, the needs of the users and, on the other hand, the restrictions of the producers (e.g. in terms of budget restrictions, personnel available or other external conditions).

Step 2: The producer puts this statistical working system into practice in order to obtain statistical figures. As this step takes place mainly within the statistical institute, users do not play a crucial role here. Yet, it should be noted that the statistical institutes have to make sure that the statistical working system is put into practice according to the agreement. This realisation does not concern the user directly but can, nevertheless, be made transparent, e.g. by documentation of quality standards, (internal or external) certification that the standards are met, or quality awards. Audits, peer reviews and internal quality awards (as practised at Statistics Sweden) are cases in point.

Once the statistical figures are produced, the user again enters into the dialogue with the producer. In step 3, the raw material (figures) is turned into the ready-to-use product, i.e. statistical information. "Raw" data in almost every case require interpretation to be applicable by the users. Interpretation of statistical figures is an important assignment of statistical institutes in at least two respects: (1) metadata must be provided for the data, and (2) connections and comparisons with data from other sources or different areas must be provided if necessary. Finally (step 4), the statistical information is applied by the user (let's hope to his or her full satisfaction).

2 How to address the question of quality

2.1 "Quality mixes" and the optimisation process

As we have seen, meeting user demands requires negotiations about the statistical working system. In these negotiations, user and producer agree upon a specific "quality mix" for the individual user. Quality of statistics is a complex item, comprising a differentiated set of user requirements concerning the statistical product. This "quality mix" consists of a wide range of elements. Users and producers of statistics have to agree upon, e.g., a survey design in which the objectives of the users and the external conditions of the statistical institute are balanced. In this context, quality comprises criteria like accuracy of data, timeliness, relevance and comparability. Which quality elements are included and how the concrete requirements are defined is a matter for an optimisation process. In this process, user and producer have to solve the problem that, given the limitation of resources, it is not possible to obtain optimum results in every respect: First of all, the different items of the quality mix are (at least partially) in mutual conflict. An increase in accuracy will, e.g., in many cases cause a deterioration in timeliness. Secondly, several external restrictions have to be taken into account. Besides the limitation of resources in general, the legal framework may prescribe the cornerstones of the survey design quite strictly, the personnel available in the statistical office may allow the use of certain methods only, etc. The result of this optimisation process is a convention, which we call the "statistical working system". To summarise: Quality has to be understood as the result of an optimisation process in which:
• (firstly) an optimum mixture of quality items with respect to user needs has to be found, and in which
• (secondly) this quality mix has to be achieved with respect to the current external conditions.
An adequate statistical working system is optimal in this double meaning.
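To make the idea of such an optimisation tangible, here is a minimal sketch in Python. Everything in it (the candidate designs, scores, weights and budget) is invented for illustration and does not come from the LEG report; the point is only the mechanism of weighing partially conflicting quality items under a resource restriction:

```python
# Illustrative sketch: choosing a "quality mix" under a budget restriction.
# All designs, scores and weights below are hypothetical.
from dataclasses import dataclass

@dataclass
class Design:
    name: str
    accuracy: float       # quality item scores on an arbitrary 0..1 scale
    timeliness: float
    comparability: float
    cost: float           # resource use in arbitrary budget units

def best_feasible(designs, weights, budget):
    """Return the design maximising the user-weighted quality mix
    among those that satisfy the external (budget) restriction."""
    feasible = [d for d in designs if d.cost <= budget]
    if not feasible:
        return None  # nothing fits: user and producer must renegotiate
    def score(d):
        return (weights["accuracy"] * d.accuracy
                + weights["timeliness"] * d.timeliness
                + weights["comparability"] * d.comparability)
    return max(feasible, key=score)

designs = [
    Design("large sample, slow release", 0.9, 0.4, 0.8, cost=100),
    Design("small sample, fast release", 0.6, 0.9, 0.7, cost=60),
]
# A user who values timeliness most (e.g. for a short-term indicator):
weights = {"accuracy": 0.2, "timeliness": 0.6, "comparability": 0.2}
print(best_feasible(designs, weights, budget=80).name)
```

With these weights the fast-release design wins; a user weighting accuracy highest would select the other design from the same feasible set. That shift of the "optimum" with the weights is exactly why the quality mix has to be negotiated per user rather than fixed once and for all.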

2.2 What can be achieved by statistical "measurements"?

Furthermore, there is a fundamental limit for statistics that should be mentioned in this context: In statistics, as well as in the social sciences and in economics, concepts have to be translated into measurable terms (see, e.g., Flaskämper 1933; Grohmann 1985; Litz 1990). In this process, called in Germany and France the "process of adequation", users often have to accept "losses". Take, for example, the term "depreciation", which is rather clear in economic theory. If depreciation has to be calculated, a lot of conventions (and simplifications) have to be made in order to achieve measurability. One may argue that this holds only for terms of such a highly theoretical nature. This is, however, not the case. Even obviously simple terms like "forest" or "household" are as such only "ideal types" that represent some undefined subsets of an extremely complex reality. As a consequence, it can be stated that the role of statistics is by definition limited to decision support. In a concept of procedural rationality (Radermacher 1999) this means that, on the one hand, decisions should be based on figures. On the other hand, it makes clear that figures normally cannot substitute for the decision: in the end there will be room beyond the figures in which decision makers have to find a solution by intuition, bargaining or other elements of their tool box. A realistic assessment of what statistics can and cannot deliver is crucial for a fruitful dialogue between users and producers. Somewhere in between the extreme positions of statistical scepticism on the one hand and "arithmomania" (Georgescu-Roegen 1971) on the other lies a point of truth. It is one of the major tasks of a statistical service to review the position of this point continuously in the light of new experiences and to promote the users' general understanding of statistics.

2.3 The measurement of data quality: another "problem of adequation"

Bearing these considerations in mind, it should be evident that the measurement of data quality becomes itself a problem of adequation: Figures used for the measurement of quality are (in analogy with statistical figures) based on a convention in which a quality concept translates theoretical notions into measurable terms. It should be noted that the quality of statistical data is quantifiable only to a certain degree. As in the more general translation of theoretical into statistical concepts, there is a "fallacy of misplaced concreteness" (Daly/Cobb 1989) in this respect as well: Excessive use of quantitative indicators will be misleading. We should, partly, work on a nominal scale on which the quality items stand side by side without any kind of ranking. (A small illustrative sketch of such a report structure follows the list in the next section.)

2.4 Paths towards the improvement of user satisfaction

As a consequence of these considerations, it is evident that the quality of statistical information comprises the following elements:
• Relevance of the working system: Does the statistical working system define the best realisation of theoretical user requirements within the financial and methodological restrictions?
• Data quality realised by the working system: Which level of quality should be attained by the (measurable) quality items?
• Quality in the application of the working system: Does the statistical service properly realise the convention that has been contracted with the users?

• Interpretation of the statistical figures for the users: Do interpretations supply the information users can expect on the basis of the convention?
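To illustrate the point from section 2.3, here is a minimal sketch of such a quality report in Python. The field names and values are hypothetical, not a standard from the LEG report; the design choice it shows is simply that quantitative indicators and nominal, deliberately unranked items stand side by side:

```python
# Illustrative sketch: quantitative quality indicators next to nominal,
# unscored items. All field names and values are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class QualityReport:
    statistic: str
    # Quantitative items: meaningful as numbers, comparable over time.
    sampling_error_pct: float
    days_after_reference_period: int
    # Nominal items: documented side by side, deliberately not ranked
    # or aggregated into a single score.
    concept_deviations: List[str] = field(default_factory=list)
    comparability_notes: List[str] = field(default_factory=list)

report = QualityReport(
    statistic="household expenditure survey (hypothetical)",
    sampling_error_pct=1.8,
    days_after_reference_period=90,
    concept_deviations=["'household' defined via shared budget, not shared dwelling"],
    comparability_notes=["questionnaire revised; break in series in reference year"],
)
print(report)
```

Nothing in the nominal fields is turned into a number, which is the whole point: forcing them onto a quantitative scale would be exactly the "fallacy of misplaced concreteness" the text warns against.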

These are the features that the improvement of user satisfaction has to take into consideration: First of all, as a basis of the user-producer dialogue, statistical concepts have to be constructed that meet the users' needs. Secondly, when it comes to the concrete design of a working system, normative settings are perhaps unavoidable (e.g. "timeliness is more important than accuracy"). In order to improve user satisfaction, users and producers should co-operate in the construction of these normative settings. Thirdly, producers should give guidance to the users concerning the interpretation of statistical figures. In the following section, some instruments for these aspects of the user-producer dialogue are described.

3 Instruments for optimising the user-producer dialogue

For an effective user-producer dialogue, two stages, at the beginning and at the end of the production process, are pivotal: The "Alpha" is the detection of the users' needs at the beginning of the production process, in order to integrate users in the planning process of statistics. Here, problems are generated particularly by the heterogeneous interests of user groups. An optimum, hence, can only be expected in the sense of a Pareto optimum, in which the aggregated user needs are maximised. In practice this means: Ensure that key users play a key role! In addition, producers have to maintain an awareness of the future needs of their users. This requires close attention to emerging policy issues, and an assessment of the information needed in the context of these issues (Brackstone 1993, 50-52). Furthermore, it has to be clarified what amount of resources is available for the production of a statistical figure: The level of quality an NSI aims at and the respective resources should be clearly defined. Normally, a higher level of quality requires a relatively higher budget. Hence, there should be concepts for priority setting in budget planning and for the management of resources in the optimisation process (both for statistics provided as public and as private goods).

The "Omega" is the check whether the users are satisfied (again, in the sense of a Pareto optimum). Corresponding surveys ("customer satisfaction surveys" etc.) may help to quantify some elements of the feedback from the users. However, not too much should be expected from these surveys. They should be complemented by in-depth interviews and reviews of the data in the statistical councils. In this chapter we give an overview of some instruments and their current use in European NSIs.

3.1 Instruments to integrate users in the planning process of statistics - the current situation in Europe

In order to make users play a central role in the planning and development of surveys, various instruments can be used. Such instruments, which help to discover and specify the needs of the users and to implement them successfully, include the following:

• statistical councils, i.e. institutions in which the general development of the statistical programmes is discussed by experts external to the statistical institution
• producer/user groups (e.g., sub-committees of statistical councils which take care of specific problems of individual statistical areas)
• customer surveys exploring the needs of a larger group of users
• formalised agreements between the producers and especially important users of statistics (e.g., Service Level Agreements in the UK Office for National Statistics)
• research in the social sciences about the different uses made of statistics
• marketing concepts integrating the information gained by the use of these instruments
• co-operation with partners in the social sciences and economics as well as in market research

This chapter aims at giving a brief overview of how these instruments are currently used in the European National Statistical Institutes (NSIs).3 In Europe, the use of these instruments shows a remarkable resemblance across the respective statistical systems. But undeniable differences have to be noted as well. These differences are, on the one hand, the result of the variety of cultural and institutional backgrounds in European official statistics. On the other hand, they point to different ways of focussing on the relationship between users and producers of statistics.

In Europe, the statistical councils are still the most important institutions aiming at an integration of users in the process of reviewing and improving statistics. Statistical councils exist in almost all European NSIs as well as at Eurostat - and often they have already existed for many decades. Therefore, it can be supposed that they still constitute the most important "channel" in which the demand can be studied and discussed. Despite the existence of statistical councils in many European countries, a comparison shows important differences in the way the councils work and how they are organised. In most countries, statistical councils have been created in order to establish a link between users and producers of statistics, to programme statistical operations, to justify new surveys and to fulfil tasks that could be referred to as "auditing". In most countries the statistical council does not take binding decisions (with the exception of the Netherlands, Italy and - partly - Portugal). However, the advice of many councils is usually said to have a considerable influence on changes and developments within the NSIs.

On the institutional level, differences are already visible in the composition of the respective councils: the number of members varies from 8 (UK Statistical Commission) to 170 (the CNIS4 of the French INSEE5). Especially the larger councils have a differentiated structure of sub-committees dealing with more specific issues. A special position is assumed by the Swedish Programme Councils, which are organised on the level of specific statistical areas only (without a centralised overhead body). Regarding their composition, two types of councils can be distinguished: the "independent expert" type and the "interest group" type. Councils of the first type exist in the UK, the Netherlands and Austria (the Statistical Council, "Statistikrat").

3 The following analysis is based on a questionnaire developed by the LEG on Quality. The questionnaire was answered by nine European NSIs and Eurostat.
4 Conseil National de l'Information Statistique.
5 Institut National de la Statistique et des Études Économiques.

Here, the (few) individual members of the council are (formally) independent of any organisation; they do not explicitly represent the views and interests of a user of statistical information but serve as impartial agents with a high reputation. However, it goes without saying that even "independent" members also represent the organisation in which they have been "socialised" or by which they have been nominated. Most other statistical councils are of the "interest group" type, i.e. they consist of about 30 to 50 members, each representing a government agency (on the local, regional or national level) or an interest group (e.g., trade associations, trade unions, employers' associations, universities, the central bank). The Austrian Central Statistical Committee ("Statistisches Zentralkomitee") is a case in point. Most institutions of this type characteristically have about 20 sub-committees which deal with more specific issues in the various statistical areas. In these sub-committees, a user-producer dialogue concerning the further improvement of individual surveys can be put on the agenda.

The functions assumed by the councils are - in spite of the organisational differences - largely the same. The main task of most councils is to review the statistical programme. However, a closer look at the functions assumed shows interesting differences as well. Figure 3 gives an overview of a range of possible functions of statistical councils.

Figure 3: Possible functions of a statistical council
• Programme - benefits: review of the statistical programme; setting of priorities. Cost/resources: allocation of resources to statistical areas; budget control.
• Products - benefits: definition of relevant products; setting of quality indicators (e.g. accuracy, timeliness, comparability etc.); auditing of product quality; checking user satisfaction. Cost/resources: allocation of budgets per product; budget control.
• Processes - benefits: setting of quality requirements and indicators (e.g. for methods and concepts, documentation); auditing of process quality. Cost/resources: controlling of the processes to achieve cost-effectiveness.

According to Figure 3, statistical councils can, on the one hand, work on issues concerning the "benefits" of the production processes (to a lesser or larger extent). Here a further distinction is necessary: The only function assumed by every council is the review of the statistical programme. Nevertheless, the majority of the institutions deal with questions of priority setting, the definition of relevant products, the auditing of product quality and the review of user satisfaction. Only a minority of the councils take care of quality requirements for processes (methods and documentation) and process auditing. On the other hand, there is the "cost/resources" dimension of the production process ("At what expense can the statistics be produced?"). Only a small number of statistical councils deal with issues of this type. The allocation of budgets per product and cost-effectiveness controlling are discussed in none of the statistical councils included in our analysis. Three NSIs have indicated that the allocation of resources within the statistical programme is at least a de facto task of their council.

An interesting case in point is the Austrian model, where the Statistical Council has the task of resource allocation and where, for the whole range of cost-related issues, a Commercial Council ("Wirtschaftsrat") has been installed, analogous to the supervisory board of a joint-stock company.

Apart from the statistical councils, customer surveys constitute another important tool to detect user needs and to integrate users in the planning process of statistics. A brief look at the current situation in Europe shows that only very few NSIs use customer surveys on a systematic and regular basis (among these few are the NSIs of the Netherlands and of Sweden). A great majority of NSIs use customer surveys only occasionally but have indicated that the introduction of customer surveys on a larger scale is on their agenda.

Another instrument that merits mention in this context are the so-called Service Level Agreements used in the Office for National Statistics in the United Kingdom. In order to manage the user-producer dialogue with important key users of statistical information well, the ONS has put in place a set of "concordats" and "service level agreements". Concordats describe the relationship between the two organisations, including their respective roles, the avenues of communication, how they will work together and what service level and results they can expect from the ONS. They also include arrangements for consulting on matters of mutual interest. Such a concordat exists, e.g., between the ONS and HM Treasury. The concordat sets out the general arrangements for a number of Service Level Agreements and provides a list of these agreements. The Service Level Agreements themselves cover a set of issues which are (among other things) relevant to the quality of the respective statistics. These issues may include:
• Description of services (detailed in an annex)
• Performance targets (the results to be delivered and the delivery timetable)
• Steering and management arrangements, including communication mechanisms
• Financial arrangements and charges: regular work free, ad hoc analyses at cost
• Performance monitoring and reporting arrangements (the ONS as responsible for monitoring performance)
• Procedures for handling variations and the resolution of issues (revisions to the annex may be agreed; reference to a higher-level committee or top managers in areas of difference)
• Review (arrangements for the annual review)
• Resolution of disputes and arbitration
• Confidentiality (covering the security of individual data)
• Ownership of information and intellectual property (the ONS to be acknowledged as the source)

These examples show various possible ways in which the dialogue between users and producers of statistical data can be organised and carried out. It should be noted that the instruments described in this chapter do not exclude one another, but have their own areas of application. While Service Level Agreements have a strong focus on the specific needs of one key user (e.g. a government department), statistical councils (or their respective sub-committees) help to include a larger number of key users. Customer surveys, finally, make it possible to reach an even greater number of - also non-institutional - users. The question of which instruments are appropriate has to be decided in the context of the statistical system in which they are adopted. The question of which elements are appropriate for a specific area leads to the problem of how the conflicting needs of different user groups can be dealt with.

3.2 Integrating users on the European level

Within the European Statistical System, communication between users and producers is as important as on the national level. On the European level, additional obstacles make the user-producer dialogue even more difficult. These specific conditions cannot be treated in depth in this contribution. Let us just briefly mention two committees working in analogy to the statistical councils on the national level: the CEIES6 and the CDIS7. In the CEIES and its three sub-committees, the member states of the EU are represented by two (private) members each. The CEIES is chaired by the Commissioner responsible for statistical information and has the task to "assist the Council and the Commission in the co-ordination of the objectives of the Community's statistical information policy, taking into account user requirements and the costs borne by the information producers." In the CDIS - a committee chaired by Eurostat - the Commission services involved in statistics are represented.

3.3 Enhancing user awareness

Improvements in the user-producer dialogue are, in general, an objective serving to improve the relevance of statistical information. Both sides of this dialogue may cause disturbances and misunderstandings:
• Statisticians often expect too much know-how and expertise from their users. They are not prepared, or simply hesitant, to simplify the message in the light of user requirements in specific areas.
• Users suffer from gaps in the education system. "Innumeracy" (as a phenomenon analogous to illiteracy; for the notion of innumeracy see, e.g., Paulos 1989; Dewdney 1993; Schneeweiß 2001) contrasts with the trend towards growing use of statistical figures in the media, in management and in the public sector. Though not as visible as illiteracy or general cultural ignorance, this type of mathematical illiteracy, it has recently been argued, has a considerable impact on everyday life (e.g., muddled government decisions, media coverage of complex phenomena).
Hence, for an effective user-producer dialogue, the quality characteristics as well as the strengths and limitations of the statistics produced should be clearly communicated to the users. Examples of programmes aiming at the enhancement of user awareness in the ESS include user seminars in the context of the release of figures, meetings for the discussion of key users' needs, special press conferences providing background information on the figures released, as well as "road shows" on the interpretation of regional data.

4 Institutional and cultural differences

The present situation in the statistical systems of different countries has emerged from long-term relationships between users and producers of statistics in specific social environments.

6 Comité consultatif européen de l'information statistique dans les domaines économique et social (European Advisory Committee on Statistical Information in the Economic and Social Spheres).
7 Comité directeur de l'information statistique.


Looking at statistics from such a historical perspective makes it possible to identify cultural and institutional conditions both as determining factors and as results of those processes (Desrosières 1998). In particular, eight factors can be distinguished in this context:
1. Dominant culture: probability methodology, national accounts, integration, economic studies, social customs, market outlets... (is the addressee thought of as "customer", "citizen" or simply "user"?).
2. Relative part and components of statistical methodology. Sharing of the tasks between "methodological experts" and "field specialists".
3. Perception and re-translation of the expectations of society: dissemination, contractual requests, statistical council, part played by university research in economics and the social sciences.
4. The profession of the statistician: training, mobility, prospects.
5. Centralisation and its various dimensions: inter-administrative and territorial. Part played by the regional and local authorities.
6. Perception of international relations.
7. Relations with the market (upstream and downstream).
8. Management methods and the part played by the departments.

Looking at the wide variety of statistical systems in Europe from that angle, it can be presumed that each country has elaborated a solution which - even if not optimal - at least satisfies the user needs under the current restrictions. Of course, an improvement of these balances has to be a permanent objective. Furthermore, the current task in many countries is: "Find a new solution with less money and more quality!" Evidently, this mission would be impossible if there were no new factors which could provide new degrees of freedom. One of these factors is a systematic management of quality. In addition to that present-day challenge within the countries, requirements are set by the European Union which clearly ask for additional figures or changes in the national working systems. Independently of the question of whether these requirements have been negotiated properly within the European Statistical System, this means that country-specific solutions partly have to be given up in order to achieve a European optimum. This change process, however, has to be carried out in a reasonable time span. Consequently, if the country solutions have emerged from developments of historical dimensions, one should not overestimate the potential for rapid change - especially since the European level has to link even more heterogeneous user groups on the one hand with a variety of statistical offices on the other. A meaningful adjustment of the eight factors, consequently, is a necessary precondition for a new stable balance in the ESS.

5 Conclusions

1. The concept of quality has to take into account the nature of the products that are provided by statistical services.
2. Quality management can and should help (a) to improve quality in the countries and (b) to achieve a solution for the convention concerning "relevance" on a European level.
3. User orientation means (a) that user requirements de facto determine the planning process and (b) that there is a need to check user satisfaction.


4. The ways in which users are integrated into the planning process differ widely. It should be a long-run objective to empower users, in particular in the concrete planning of the statistical programme and the main statistical products (= the statistical system). As quality and the variation of quality within the programme is not least a question of resources and their allocation, the resource side should also be explicitly linked to user needs and their (aggregated) willingness-to-pay.
5. The system of official statistics is more than the sum of individual products. On the contrary, users increasingly ask for combinations of products and services of statistical offices. It is therefore important to ensure that statistical systems can develop in a sustainable manner. A "pay & research" policy would be suitable only for isolated statistical products with the character of private goods (which is the exception). As a consequence, the main entrance to a new and better solution of the quality problem is signposted "Relevance of the Statistical System". Whether the present institutional arrangements of the user-producer dialogue are still effective and efficient in that sense has to be carefully reviewed in the countries as well as on the European level.

6 References

Brackstone, Gordon J., 1993: Data Relevance. Keeping Pace with User Needs. In: Journal of Official Statistics 9, 49-56.
Daly, Herman E. and John B. Cobb, 1989: For the Common Good. Boston: Beacon Press.
Desrosières, Alain, 1998: The Politics of Large Numbers. A History of Statistical Reasoning. Cambridge, London: Harvard University Press.
Dewdney, Alexander K., 1993: 200 Percent of Nothing: An Eye-Opening Tour Through the Twists and Turns of Math Abuse and Innumeracy. New York etc.: John Wiley and Sons.
Flaskämper, Paul, 1933: Die Bedeutung der Zahl für die Sozialwissenschaften. In: Allgemeines Statistisches Archiv 23, 58-71.
Franchet, Yves, 1999a: Statistics & Quality go Hand-in-Hand. In: Eurostat (ed.): Quality Work and Quality Assurance within Statistics. DGINS Conference in Stockholm. Luxembourg, 3.
Franchet, Yves, 1999b: Improving the Quality of the ESS. In: Eurostat (ed.): Quality Work and Quality Assurance within Statistics. DGINS Conference in Stockholm. Luxembourg, 88-113.
Georgescu-Roegen, Nicholas, 1971: The Entropy Law and the Economic Process. Cambridge, Mass.: Harvard University Press.
Grohmann, Heinz, 1985: Vom theoretischen Konstrukt zum statistischen Begriff - das Adäquationsproblem. In: Allgemeines Statistisches Archiv 69, 1-15.
Knoche, Peter, von Oppeln-Bronikowski, Sibylle and Dietmar Kühn, 1999: Marketingkonzept der Statistischen Ämter des Bundes und der Länder. In: Wirtschaft und Statistik, no. 7/1999, 531-538.
Litz, Hans-Peter, 1990: Statistische Adäquation und Idealtypus. Anmerkungen zur Methodologie der Wirtschafts- und Sozialstatistik. In: Allgemeines Statistisches Archiv 74, 429-456.
Paulos, John Allen, 1989: Innumeracy. Mathematical Illiteracy and its Consequences. New York: Hill and Wang.
Radermacher, Walter, 1999: Indicators, Green Accounting and Environment Statistics - Information Requirements for Sustainable Development. In: International Statistical Review 67, 339-354.
Schneeweiß, Hans, 2001: Die Wahrnehmung der Statistik in der Öffentlichkeit. In: Allgemeines Statistisches Archiv 85, 151-172.


UNDERSTANDING USERS AND MANAGING QUALITY IN A STATISTICAL AGENCY

Susan Linacre, Office for National Statistics, UK

1. Introduction

Quality is not absolute, nor is it a specific characteristic of a product or service considered on its own. It relates to the intended use of the product or service, and needs to be considered within the context of that use. What is a good quality product for one user may be a very poor product for another. Furthermore, quality is a multidimensional concept. For statistical products its dimensions include relevance, accuracy, timeliness, coherence, and accessibility including interpretability. These can be in conflict with each other. Pursuing timeliness may put accuracy at risk; coherence may conflict with relevance. This makes it difficult for an organisation to pursue a strategy for quality. How should priorities be assessed? Which uses and aspects of quality should be given priority? Trying to please one user on one dimension may cause problems for another. Furthermore, an aspect of quality for many users is the credibility that goes with the statistical brand of the producing agency. Diluting this brand by providing fitness for purpose for a variety of users will impact on the credibility of the brand, and hence the value of all statistics produced under the brand. Judgements must be made not only about fitness for a specific use, but also about fitness in terms of any implications for the agency's reputation and the credibility of its full range of products.

This paper provides ideas on strategies for achieving quality in a National Statistical Institute (NSI), coming at it from the dual perspective of the role of an NSI, and hence its portfolio of products, and of the range of users and uses of these products. It firstly discusses the special role of the NSI in the country's information system, and the portfolio of output types that an NSI produces. It then couples this with a simplified classification of the main users of statistical systems and their needs in quality terms, to suggest a proposed strategy for achieving quality, in a realistic way, across a range of product types and groups of users.

Portfolio of Products of a National Statistics Institute

Put simply, the main role of a National Statistical Institute is to provide a sound core of high integrity statistics to describe, with credibility, the economic and social state of the nation, and to serve as a basis for planning within that nation. To achieve this, the NSI also has a duty to build the trust of respondents that is necessary for them to provide the high quality data that form the base for the statistics. However, the 'sound core' is not a static parcel of statistics that can be readily limited to a particular set of topics and adequately resourced to achieve something close to perfection. There is a huge range of possible statistics that might be covered and, in most cases, a fairly limited budget available to provide what coverage is possible. Furthermore, the world evolves rapidly, and the core that was relevant yesterday is only partially relevant today, with new needs continually emerging.

The nature of the NSI role means that it will have a portfolio of products available to meet user needs, with the products in different stages of maturity, and having achieved different levels of 'perfection'. The diagram below shows, in simplified form, the main types of output that are typical of an NSI. Two axes are used. The vertical axis indicates the extent to which the information inherent in the set of statistics is measured directly by objective processes. Low on this axis are products that are based on relatively simple measurement with minimal requirement for theory, modelling or assumptions of any kind. Population census information would be a key output of this type. So would other outputs based largely on measurement: estimates of employment and unemployment from a labour force survey, or retail sales from a survey, and so on. Also at this end of the axis are the statistics derived directly from administrative sources. Higher on this axis are those statistics that can only be produced by incorporating some assumptions (theoretically or empirically based) with the measured data. Examples include regional estimates based on synthetic estimation, productivity measures, or derived statistics such as productive capacity. The horizontal axis indicates the extent to which the available data are directly collected for the specific purpose, and hence tailored to that purpose, and constitute a significant statistical source, as against being largely a by-product of some other process but used for statistical purposes. This can be proxied by the two polarities of direct data collection and administrative data, but this is a simplification.

Diagram 1. National Statistics Measures: Impact of different statistical sources and models

[Diagram not reproducible from the source extraction (original file: Diagram 1.xls). Its axes are the MODELS dimension (vertical: from outputs based on direct measurement to outputs incorporating models and assumptions) and the DATA dimension (horizontal, from high to low: from data directly collected for the statistical purpose to administrative by-product sources), as described in the text above; legible labels include "Statistical Frameworks" and "Fundamental statistical system measures".]

This diagram highlights the existence of a range of different output types. These are all valid and valued outputs of an NSI, but they have different quality drivers and different quality attributes. Those on the lower left-hand side of the diagram are particularly driven by relevance, accuracy and coherence. The core statistics of most NSIs lie in this part of the graph. They are in many cases the source data for other outputs. They warrant direct collection and justify the imposition of compliance costs on respondents to directly collect the information. For some of these statistics, for example sub-annual economic indicator series, timeliness is also a key driver, and because of their multiple uses, accessibility and interpretability are of strong importance.

The statistics on the right-hand side and in the upper half of the diagram are driven by a need for statistics where other alternatives are not readily available, or are too expensive to obtain by direct collection. Accuracy may be traded to achieve relevance and timeliness. 'Something is better than nothing' can be a motivator for these sorts of statistics. However, care is needed here. If accuracy is traded too far, there is a danger of diluting the 'NSI brand' that is associated with the core products nearer the origin. Diluting the brand reduces the credibility of all the brand's outputs, and hence their value, as much of the value of an untestable figure must lie in the reputation of its producer. NSIs therefore tend to be careful how far outward they extend their portfolio of products and, as they do, attempt to differentiate these products from the core products with labels such as 'experimental'. For these statistics, a key aspect of quality is transparency in method and strong support in the appropriate interpretation of the data, and in judgements about its fitness for use in different circumstances. For statistically literate users, these sorts of statistics can be very useful. Taken off the shelf by unwary users, an important aspect of their quality is the ability to associate them with clear and easy-to-use 'health warnings'.

As indicated by the description of the NSI role above, NSIs see themselves existing largely for the public good, to provide a sound description of the economic and social state of the nation, and a basis for planning within it. It is this role which drives the vast majority of the work programme. However, as a by-product of the work of an institute, there will be many opportunities for adding value for particular individuals through more tailored products and services. These products will tend to use already available source data to create new products; thus they will tend to be in the top half of Diagram 1.

Different User Groups

In the same way as we can look at the portfolio of products of an NSI, we can look at the portfolio of users of these products. Diagram 2 below provides a simplified market segmentation for users of NSI products. The market is segmented into groups according to the significance of individual users in the group and the statistical capability of these members. The vertical axis indicates the degree to which individual members of the group are able to influence priorities in the work programme and approach of the NSI - or, put another way, the degree to which outputs are tailored to individual needs. The horizontal axis indicates the degree of statistical literacy of the user group: their ability to cope with complexity in terms of information about the statistics, and their ability and willingness to enter into a partnership in terms of judging optimal trade-offs of one aspect of quality against another.

Diagram 2. Market segmentation of users of NSI products

[Diagram not reproducible from the source extraction (original file: Diagram 2.xls); it segments users along the two axes described above: influence on the NSI work programme (vertical) and statistical capability (horizontal).]

Users within the top right-hand segment of the above diagram tend to be key clients with whom NSIs develop an ongoing relationship. Below them, on the right-hand side, are the other key players in the statistics arena: academics, economic analysts in industry and the media, and statisticians in other government departments or levels of government. It is with these two sets of users that an NSI needs to create an environment where the complexity associated with developing and collecting statistical information is well understood, and where users can be engaged in the real challenge of achieving quality. These users will be particularly involved in peer review and feedback on quality issues associated with the more model-based components of the NSI portfolio.

Users on the left-hand side are not able to provide real value added in terms of feedback on methodological issues and the quality trade-offs that are most appropriately made between dimensions like timeliness and accuracy, but rely on the NSI to 'get it right'. They need a certification system that is simple to use. An analogy might be drawn with the food industry. The right-hand side users will want to know the ingredients, and be able to interpret them, possibly with help, in terms of nutritional value and potential allergies or side effects. Those on the left want to know the fat content, the sugar content, the sodium content etc., or maybe just whether it has a Weight Watchers logo. On the other hand, there are some aspects of the delivery of the product on which this side has a clear view and in which it can assist the NSI in improving its quality. These are things like accessibility, timeliness, relevance, and clarity of labelling.

The upper left-hand quadrant consists of users that the NSI chooses to form a relationship with for some reason, but it might be a very limited relationship in terms of products and may be quite brief in its duration. An example of users in this category would be members of the media, who are important in terms of providing effective dissemination of statistical information to the public and warrant the special effort of a tailored service. Another example would be a commercial client who is paying for a value added product of some sort. To provide an effective service to these individuals involves spending time understanding their specific needs, and explaining the characteristics of the solution provided. Users in the lower left quadrant seek accessible, low cost, timely services.

Table 1 summarises the characteristics of the main segments of the above graph, and the quality attributes that are important to them. Given that what constitutes quality for one group of users on a particular product or set of products can be very different from what constitutes quality for another, a one-size-fits-all approach to quality management is unlikely to be effective. Also given in the table are quality management strategies relevant to the different groups.

Table 1: Main user segments, their characteristics, and relevant quality management strategies

User Group A: members of the public, students

Characteristics: Low level of statistical capability; interest may be regular or transient, but is for an 'off the shelf' product, delivered quickly and easily with low transaction costs. Emphasis is on accessibility and broad interpretability of data for their immediate need; generally not interested or able to be a partner in quality other than in feedback on these dimensions. However, this group is the key client of the role of the NSI in providing a high integrity picture of the state of the economy and society, as part of democratic processes.

Quality management strategies:
• ensure the NSI represents the users effectively in terms of horizon scanning to pick up emerging issues relevant to public concerns in terms of the state of society or the economy
• build credibility of the brand so it can provide a 'good housekeeping seal of approval' in place of detailed metadata on quality on outputs that paint the picture of social and economic conditions
• use labels such as 'experimental', and broad quality measures (eg one star statistics through to 5 star statistics), as an aid to appropriate use for different parts of the portfolio of products (see the sketch after the table)
• focus on achieving easy access and low transaction cost for specific queries, and seek feedback on usability of data in these dimensions via web based surveys, tracking use of the web and routes through screens, or, for paper based users, feedback through secondary sources, eg public libraries or the telephone inquiry service
• pursue mass tailorisation (as for Amazon) to improve delivery service through electronic media
• pursue secondary mechanisms to improve quality of access, eg supporting public librarians in understanding the statistics available, and supporting the media in providing well presented and interpreted statistics

User Group B: individual organisations who seek a tailored service, eg on a commercial basis, or as members of the media etc.

Characteristics: Low level of statistical capability; interest may be regular or transient, but warrants a tailored approach. Transaction costs will be higher. Effort will often be on appropriate interpretation and use of data, as well as accessibility. Tailoring will generally not be in terms of redefining basic measures, but rather be about their presentation or about value added products derived from them.

Quality management strategies:
• emphasis is on understanding the individual's particular need in statistical terms and responding to it
• provision of statistical expertise to advise individuals of the implications, in quality terms, of the solutions they are being provided
• seek feedback on the value of the solution provided, and the service quality, through client surveys and web based discussion groups on specified topics relevant to the group
• seek options to assist other researchers/statisticians providing tailored support to this group

User Group C: Sophisticated Users, for example researchers, financial analysts, analysts in policy departments

Characteristics: High level of statistical capability. Ease of access and ease of interpretation are not as important as completeness, coherence and accuracy of the information available. These users are often interested in all dimensions of quality. They often use the statistics as the basis of further analytic work and need to understand the quality attributes. As a group these are significant users of statistics in terms of extracting value from them and providing consultancy services based on their use. Members of this group are willing and able to act as partners with the NSI in achieving appropriate quality tradeoffs across the portfolio of NSI work.

Quality management strategies:
• provide detailed information on the problems, issues and tradeoffs involved in NSI work, to make these transparent to this group of users, and to build the group's understanding of the issues and their statistical implications so that they can be better partners in determining optimum statistical solutions
• involve representatives from this group in expert committees, review teams etc to contribute to the development and review of statistics and the balancing of quality tradeoffs
• provide measures of quality on all aspects of interest to this group where possible, qualitative as well as quantitative, to support the effective use of the data
• seek feedback via surveys on a full range of quality issues about key statistics from members of this group
• seek input via mechanisms such as focussed web based discussion groups chaired by the NSI and targeted at members of this segment

User Group D: Key Users: Treasury, Bank of England, Key Policy Departments, Key International Agencies

Characteristics: High level of statistical capability, and interested in all aspects of quality. Ease of access and ease of interpretation are not as important as completeness, coherence and accuracy of the information available. As individuals these users will be influential in the decisions taken, particularly in the areas of direct collection and measurement, in the portfolio of agency products. They will often be in the best position to judge appropriate quality tradeoffs, and decisions will benefit by involving them directly in these, at various stages of the development and implementation of proposals.

Quality management strategies:
• maintain fora with representatives of this group to identify emerging issues in terms of coverage of the statistical portfolio
• provide detailed information on the problems, issues and tradeoffs involved in NSI work, to make these transparent to this group
• involve members of the group in expert committees, review teams etc to contribute to the development and review of statistics and the balancing of quality tradeoffs
• provide measures of quality on all aspects of interest to this group where possible, qualitative as well as quantitative, to support the effective use of the data
• seek direct feedback from these users on products and services provided, on a regular basis, eg via SLAs and key client meetings
• seek regular opportunities for liaising with these users about emerging issues in their programmes that may be indicative of new statistical needs
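To illustrate the broad quality labelling suggested for Group A, the following minimal Python sketch shows one way star ratings and an 'experimental' flag might be attached to outputs. The structure, field names and example entries are hypothetical illustrations, not an existing NSI system.

    from dataclasses import dataclass

    @dataclass
    class QualityLabel:
        """Broad, user-facing quality metadata for one statistical output."""
        output_name: str
        stars: int          # 1 (use with caution) through 5 (fit for most purposes)
        experimental: bool  # True while methods are still under development

        def badge(self) -> str:
            # Render the simple 'certification' a Group A user would see.
            flag = " [experimental]" if self.experimental else ""
            return f"{self.output_name}: {'*' * self.stars}{flag}"

    # Hypothetical portfolio entries, for illustration only.
    portfolio = [
        QualityLabel("Monthly retail sales index", stars=4, experimental=False),
        QualityLabel("E-commerce turnover estimate", stars=2, experimental=True),
    ]
    for label in portfolio:
        print(label.badge())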

Conclusion:

If we look then at the issue of managing quality in an agency, there are some key steps in achieving this:
• Understand the users, their needs, and how their use of information relates to the role of the agency and feeds into its programme of activity in fulfilling that role.
• Understand the nature of the NSI's products and services and what the important drivers are for the quality of each.
• Build appropriate relationships with users to help achieve the right quality balances.
• Make the complexities and methodologies underlying statistics visible to those who can be partners in finding the best solutions, while providing less sophisticated users with more usable information on quality.
• Build NSI skills to support its expert role in providing a relevant picture of social and economic conditions, identifying emerging areas of interest, and developing the most effective possible statistical solutions.
• Build NSI capability to identify and cease low priority statistical activity that is diverting resources from higher priorities.
• Develop a culture for quality in the agency, and a culture of openness and partnership with users in producing the best possible solutions, and provide relevant measures of quality to enable users to make effective use of the information.
• Invest in infrastructure that supports all elements of quality, including coherence and the ability to respond quickly to emerging issues.
• Develop the trust and co-operation of respondents through good collection practices and through respecting the confidentiality of their data.
• Guard the reputation of the NSI for the credibility and integrity of its outputs, noting the added value to products that comes through the ready acceptance of figures as credible, and the loss in value that comes through a downgrading of the overall 'quality rating' given to an agency.

Achieving quality is vital for any organisation, but an appropriate quality balance cannot be achieved without understanding the different needs, and the priorities of meeting these different needs, within the organisation's goals. There is little point in trying to apply a one size fits all approach; quality must be managed thoughtfully across different needs and priorities in order to meet those goals.


CUSTOMER-DRIVEN QUALITY, REVISITED 1/

Rich Allen
National Agricultural Statistics Service
USA

ABSTRACT

In a 1994 paper, prepared for the American Statistical Association Winter Meetings, the National Agricultural Statistics Service used the term customer-driven quality to describe its ongoing communications with data users and other customers. That paper summarized some 40 examples of improving quality through timing, content, or format changes which had been made in the previous 3 years in response to input from customers--without new legislation or funding. This paper discusses the continuing evolution of customer contact mechanisms as the world has largely shifted to electronic communication and cites lessons learned after installing new features. It will provide many new examples of customer-driven quality and highlight two important case studies. One, involving a time of day change for the release of many statistical reports, will document Agency efforts to work with and placate two divergent customer viewpoints. The second will describe procedures to provide immediate release news services improved access to reports while maintaining absolute security.

INTRODUCTION

This paper has a specifically selected title. Customer-driven refers to the view that quality can only be determined by customers. Data or service providers create what they feel are appropriate products but customers decide if those offerings truly are of good quality. Therefore, quality can best be assured by working closely with and listening to customers to meet their needs. Revisited refers to the fact that a similar paper was presented at a January 1994 American Statistical Association meeting. That paper identified some 40 specific quality improvements which had been implemented by the U.S. National Agricultural Statistics Service (NASS) in 3 years. This paper takes a fresh look at changes and improvements made from 1994-2000. Information and examples for this paper have been gleaned from ongoing annual summaries and other documentation.

__________________
1/ Prepared for the International Conference on Quality in Official Statistics, May 14, 2001, Stockholm, Sweden. Rich Allen is the Associate Administrator of the National Agricultural Statistics Service (NASS). Views expressed in this paper are those of the author and do not necessarily reflect policies of NASS or the United States Department of Agriculture.


For this paper, Customer-Driven Quality is defined as making changes in timing, content, or format due to specific input from customers. As you will see, many improvements were made through “mining” already collected data. All changes to be discussed were publicly instituted. Even though one individual may have suggested the need for a new feature, that individual would not receive any special access or notification before other users. Changes due to new legislation or funding are excluded from this paper. Additionally, although NASS has been a leader in developing user friendly electronic services, those developments have been excluded since they were done from an approach of broad customer service and not in response to specific input.

DISCLAIMER

This paper should not be construed to imply that NASS is the only organization listening to customers and adapting its services. Instead, it is hoped that the descriptions of customer input mechanisms and examples of changes that have been made will serve to help organizations examine their own operations.

NASS CUSTOMERS

NASS issues over 400 statistical reports each year from its Headquarters, and its 45 State Statistical Offices issue approximately 9,000 additional reports or press releases. Reports cover official crop and livestock forecasts and estimates; prices paid and received by farmers, farm labor, farm numbers, and land values; weather, crop progress, and crop conditions during the growing season; chemical usage and other environmental data; and a wide variety of reimbursable surveys on agricultural related topics. NASS is also responsible for the every 5-year census of agriculture.

Thus, major customers are individuals and businesses which use reports for business decisions or as input to analyses as they advise others. A second level of customers is a broad sector of libraries, educational institutions, and individuals who have occasional need for data and refer to NASS published results and data bases.

All State Statistical Offices operate through formal cooperative agreements with State Departments of Agriculture and public educational institutions. NASS is the official State agricultural statistics organization in all States as well as being the Federal source. This means that the Commissioner or Director of Agriculture in each State is a prime customer. However, specific actions taken to serve those cooperators have been excluded from this paper.

In addition to business and cooperator customers, NASS data are widely used by other Government agencies. NASS also serves as a major data collection source for Government agencies which need new or periodically updated data. Since these data collections are funded through cooperative agreements or contracts, special surveys are excluded from this paper.

CUSTOMER INPUT MECHANISMS

Because of the Federal-State relationships and the wide range of publications, NASS offices constantly get questions and suggestions from customers. Some suggestions do lead to new features but most changes come from more formal exchanges.

Data User Meetings: For the past 15 years, NASS has organized and coordinated information exchange meetings. NASS includes other information agencies of the Department (such as the Agricultural Marketing Service and the World Agricultural Outlook Board). Most meetings start with agencies highlighting new reports and features and describing changes in services. The rest of the day is devoted to questions and discussion. A wide variety of inputs are received but most meetings tend to have three or four special timely topics which get considerable attention. Many people attending data users meetings fall into a “power user” category. They know USDA data series very well and use them constantly. Their suggestions are often due to industry structural changes or new technologies that agencies might consider to improve service.

Special Focus Data User Meetings: NASS also sponsors meetings to discuss and evaluate specific estimating programs. For example, NASS started collecting chemical use data in 1990 and several on-going environmental related surveys were added during the 1990's. A meeting was held in 2000 to critique the existing data series and to evaluate priorities for program modifications. There was a general consensus that the frequency of covering some commodities could be cut but additional information was needed on target pests and application timing. In another special data user meeting example, a sizable group of individuals interested in Integrated Pest Management (IPM) was invited to a meeting the hour before release of the first NASS IPM report. Confidentiality regulations do not permit anyone to receive copies ahead of release. In this situation, people could review and discuss the report but no one could leave the room until the official release and no communication devices were allowed. This meeting provided immediate feedback on the newly available information and ensured that key data users understood the report explanations.

Lock-up Briefings: Several of the most sensitive statistical reports are issued under special lock-up security provisions and signed by the Secretary of Agriculture at release time. Groups of farmers, farm organization leaders, and analysts often attend these releases to see the security measures first hand. There is normally an opportunity for face to face discussions with the visitors in which suggestions might be received on report formats and procedures.

Commodity Meetings: Many commodity producer organizations have statistics committees. Some come to NASS Headquarters to discuss the statistics available for their commodity or NASS staff members attend their conferences. These meetings provide a sounding board for new Agency proposals as well as a chance to receive suggestions for improvement in present reports.


NASS can devote a certain amount of resources to specific industries and there is some flexibility in terms of allocating those resources. For example, NASS adjusted the months for the first forecasts of potatoes and peaches based on industry requests. However, the Agency first verified there was widespread industry agreement and it was not just reacting to a vocal minority. In addition to meetings with statistics committees, NASS staff members regularly attend national and regional meetings of general farm organizations and commodity groups. Suggestions for improvement or support for present programs also come from those meetings.

Customer Service Office: NASS State Statistical Offices have always served as a clearing house for data requests. Similarly, commodity statisticians in Headquarters received many requests but there was no consistent location for general agricultural statistics questions. In May 1995, NASS established a customer service center which featured a toll free information number. To provide true customer service there were key operational principles:
1. Every call during normal business hours would be answered by a person.
2. A customer pledge was adopted - - and followed.
3. People staffing the customer service center received extensive training on data products and report procedures.
4. Commodity specialists were available for answering detailed NASS report questions.
5. If an answer could not be provided immediately because the data source was another agency, NASS would not transfer the call or provide a telephone number. Instead, the NASS staff member would pursue the request and call the requestor back or ensure someone else did respond.

There were many lessons learned from the customer service center:
1. Many callers are amazed to talk to a “real person.”
2. Only about half of the calls received are for NASS data. Perhaps 20 percent of the calls are for data that do not exist anywhere.
3. Some individuals call the NASS toll free line asking to be transferred to agencies which do not provide a free service.
4. Most data callers want a quick answer and do not want to take time to provide name and address information for Agency follow-up purposes.
5. Customer service quality follow-up calls need to be within 3 days of the original contact. If a data requestor is called later than that they might not remember the details of the information exchange.
6. A good portion of all questions do repeat themselves so files and notebooks of past information contacts and telephone numbers have been created.

The customer service office was started before electronic transmission and Internet access became the norm. Hotline staff members were provided training on Internet access and new electronic services as they became available. There was an evolution within about 3 years of the Hotline’s inception.

At first, most individuals could not be on the Internet and the telephone at the same time. The Hotline staff provided navigational instructions that needed to be written down on how to get to a data source. Presently, most individuals can have a totally interactive Internet search session with our staff members. With the evolution of Internet electronic access there was a new set of lessons learned:
1. An autofax capability for most short reports meets many data users’ desires.
2. Hotline staff members need detailed training in electronically accessing all Agency offerings - - and those of other commonly requested data sources.
3. It is usually best to navigate a person through the steps to find a data source rather than just providing them with the one or two requested numbers. (Otherwise, they are likely to call back soon for other information from the same source.)
4. The number of calls has not declined as more people got Internet access but the type of calls changed.
5. About half as many data requests now come from e-mail compared to telephone - - and e-mail requests tend to be more detailed.

NASS started its customer service hotline service before it received the responsibility for the periodic agricultural censuses. With the new responsibility, NASS acquired a unit that handled census information requests. When it was possible to collocate staff members, all information assistants were combined into a Marketing and Information Services Office. The result has been a good atmosphere to expand knowledge about all Agency data products and a larger critical mass of staffing to meet the goal of providing a real person on the line.

QUALITY THROUGH NEW REPORTS AND FEATURES

NASS normally requires funding to create new reports because of the need for additional list preparation work, questionnaire design, data collection, editing, etc. However, it is often possible to add a few questions to an existing survey to create a new product to meet data user requests. The examples below include one new survey that was possible because the survey population was already known along with several new reports that have been started.

Weekly Dairy Products Prices: The weekly Cheese Price Survey is an example of a survey originated because of a request from a very important customer - - the Secretary of Agriculture. Cheese price was a significant factor in the formula which set the price that U.S. farmers received for their milk. The cheese price used was determined by a once a week industry auction. The volume traded was usually less than 1 percent of total cheddar cheese. When cheese prices took a sudden drop, the always sensitive dairy industry became even more contentious with concerns of whether prices could have been manipulated. The Secretary stated that he would have the Department of Agriculture determine the true price and turned to NASS. NASS focused on industry customers in designing the new program. The industry wanted an unbiased price that would accurately track even small price movements and wanted timely information.

NASS designed a weekly price series with data available a week later. NASS met with dairy product associations to discuss the program and develop strict specifications. As it turned out, the auction market specifications on product size and on pricing cheese with no packaging or transportation costs were ideal for the new survey. That was a plus since producing plants were familiar with those specifications. There were essentially 200 cheese plants producing unaged cheddar cheese. The smallest 100 plants combined accounted for less than 1 percent of sales. Therefore, those plants were excluded since their actions could not have a significant impact on average price. In order to verify that consistent price data were received, NASS did not publish any reports until 5 weeks of data had been collected. The data were summarized each week with special attention to how quickly reports were available and to the consistency of the weekly reported volumes and prices. All published reports include 5 weeks of information so data going into the monthly average price used in the milk pricing formula are always available. The monthly cheddar cheese price was extremely successful and NASS later received funding to continue the program. In fact, it was so popular that the Agency was funded to establish similar weekly programs for butter, whey, non fat dry milk, and, just recently, milk fat/cream.
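The design decisions described above, cutting off the smallest plants and averaging the remaining weekly reports, can be made concrete with a short sketch. The Python below is a stylized illustration only: the plant volumes and prices are hypothetical, and the volume-weighted average is an assumed estimator, since the paper does not specify NASS's exact calculation.

    # Stylized sketch of the cheddar cheese price survey design described above.
    # All data are hypothetical; the weighting scheme is an assumption.

    def select_plants(annual_volume, max_excluded_share=0.01):
        """Exclude the smallest plants while their combined share of total
        volume stays below max_excluded_share (the 'less than 1 percent' rule)."""
        total = sum(annual_volume.values())
        excluded, running = set(), 0.0
        for plant, vol in sorted(annual_volume.items(), key=lambda kv: kv[1]):
            if (running + vol) / total < max_excluded_share:
                excluded.add(plant)
                running += vol
            else:
                break
        return [p for p in annual_volume if p not in excluded]

    def weekly_average_price(reports):
        """Volume-weighted average price from (price, volume) pairs."""
        volume = sum(v for _, v in reports)
        return sum(p * v for p, v in reports) / volume

    # Hypothetical population of plants with skewed sizes.
    plants = {f"plant{i}": float(i) for i in range(1, 21)}
    print(len(select_plants(plants)))  # 19: only the very smallest is dropped

    # Hypothetical week of plant reports: (price per pound, pounds sold).
    week = [(1.10, 500_000), (1.12, 250_000), (1.08, 750_000)]
    print(round(weekly_average_price(week), 4))  # 1.0933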


New Publications: Most NASS State Statistical Offices create an annual bulletin which summarizes all recent agricultural statistics for their State. These bulletins include features such as ranking the importance of various commodities and the top producing counties. Data users often complimented the State compilations and requested a similar national product. A Statistical Highlights publication was created in 1995 and is repeated annually. Starting in 1998, the full publication has been available over the Internet as well as a paper product. In a somewhat related action, an electronic Monthly Ag. Newsletter has been created by NASS. This newsletter summarizes supply and demand data for most major commodities, drawing heavily on publications from other Department of Agriculture agencies. This newsletter is available to everyone but is widely used by State Statistical Offices in preparing newsletters and other reports for their data users and in preparing pre-survey letters. A new customer feature, as of 2001, is a Statistical Program Monthly Highlights summary posted to the NASS Home Page the first week of each month. All new reports, new tables, or changes in format are described for the reports to be issued that month. This feature is particularly appreciated by the frequent user customers.

Canadian Livestock Reports: Starting about 1996, many data users requested that NASS provide Canadian livestock estimates because of concerns that U.S. markets were being affected. Also, Canadian data were difficult to obtain since Statistics Canada had a pay for publication policy. NASS officials discussed several alternatives with Statistics Canada for combining reports or repackaging Canadian data. Republication was the most feasible since Canadian annual and midyear Cattle Inventory reports are released 2-3 weeks after corresponding U.S. reports. The first U.S.-Canada Cattle Inventory reports were issued in 1998 with comparable data items for the two countries. Canadian Cattle on Feed and Hogs and Pigs reports are handled by different organizations than Cattle Inventory. Based on the Cattle report success, NASS continued negotiations and was able to add both Cattle on Feed and Hogs and Pigs reports in 2000. A new Canadian data link has been added to the NASS Home Page.

Farmers’ Use of Computers: One new interest area in the mid and late 1990's was whether U.S. farmers were using computers. A first time indication was created in 1995, when a question was added to the annual Farm Costs and Returns Survey. That small sample size limited publication to a country wide average. Questions were added to the 1997 NASS June Area Frame Survey which enabled publication of State estimates and national economic class of farm estimates. While some data users requested annual, or even more frequent, updates of computer use, NASS decided to release estimates every other year. The 1999 survey added questions on access to the Internet, because of concerns about the “Digital Divide” limiting rural access. The 2001 survey will again ask about Internet access and add basic questions on use of e-commerce for purchasing inputs or selling products.

Genetically Enhanced Seed Varieties: Because of the interest, and controversy, about the use of genetically enhanced seeds, NASS added questions to the 2000 Prospective Plantings and June Acreage Surveys about the use of such varieties for corn, soybeans, and cotton. The June sample size was large enough to present State level estimates for major producing States as well as U.S. estimates.

Geo-Spatial Information System Products: Starting in 1994, the NASS Research and Development Division began creating standard data products as part of its ongoing Geo-Spatial Information System (GIS) research. The first products were crop county estimate maps which presented acreage and yield range data. By tying into its remote sensing research, maps were created every 2 weeks of vegetative index readings obtained from the operational weather satellites. The ability to produce such maps had been developed in 1993 in order to evaluate the massive flooding of the primary corn and soybean producing area. NASS combined data base and GIS techniques to create maps which show year to year differences and comparisons between the current year and median results. While no crop acreage or yield estimates are currently feasible from these GIS products, the maps which are now posted to the Internet every 2 weeks do receive a lot of interest.
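The difference and median-comparison maps mentioned above reduce to simple raster arithmetic. The following Python/NumPy sketch is a minimal illustration, assuming vegetative index composites are available as arrays on a common grid; the random arrays stand in for real satellite data, and the actual NASS processing chain is certainly more involved.

    import numpy as np

    # Hypothetical biweekly vegetative index composites for the same period in
    # ten past years, on a common grid: shape (years, rows, cols).
    rng = np.random.default_rng(0)
    history = rng.uniform(0.2, 0.8, size=(10, 4, 4))
    current = rng.uniform(0.2, 0.8, size=(4, 4))

    # Year-to-year map: current composite minus the same period last year.
    diff_prev_year = current - history[-1]

    # Median-comparison map: current composite minus the multi-year median,
    # computed cell by cell across the historical stack.
    diff_median = current - np.median(history, axis=0)

    print(diff_prev_year.round(2))
    print(diff_median.round(2))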

QUALITY THROUGH TIMING CHANGES


While the most noticeable change in NASS report scheduling is changing crop-related releases from 3:00 p.m. to 8:30 a.m., featured elsewhere in this report, there have been other significant adjustments to improve service to data user customers.

Adding Soybean Revisions to September Stocks: The normal revision cycle for most spring planted crops is to publish previous year revisions with the current year end-of-season estimates in early January. However, since nearly all soybean disposition can be accounted for by administrative export and crushings data, the need (or lack thereof) for revisions of the previous year soybean production estimates is obvious to many data users when the end of the marketing year soybean stocks are published at the end of September. (Data users will not know whether area harvested or yield or both need to be revised but they can roughly estimate the size of the total production revision.) Since acreage and production data are normally published in Crop Production reports, the NASS pattern was to publish previous year soybean revisions in the October Crop Production report, about October 10. However, data users pointed out that such a policy led to speculation and uncertainty about the size of change for nearly 2 weeks. It also meant that data users had to enter previous year revisions into their data bases on the day of the October Crop Production release which delayed interpreting the current year forecast. NASS agreed with the concerns. The Agency now includes previous year soybean acreage, yield, and production revisions in the September Grain Stocks report. The same data table is then repeated in the October Crop Production, for completeness.
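The inference data users make here rests on a simple disposition identity. The worked Python example below uses hypothetical numbers and a deliberately simplified balance that ignores items such as seed, feed, residual use and imports; it is a sketch of the reasoning, not the official balance sheet.

    # Simplified soybean marketing-year balance (hypothetical, million bushels).
    # With exports, crushings and stocks known from administrative and survey
    # data, users can back out the production level the data imply:
    #   implied production = exports + crushings + ending stocks - beginning stocks
    exports, crushings = 800.0, 1_550.0
    beginning_stocks, ending_stocks = 200.0, 250.0

    implied_production = exports + crushings + ending_stocks - beginning_stocks
    published_production = 2_380.0  # earlier official estimate, hypothetical

    # The gap signals the rough size of the coming revision, though not how it
    # splits between area harvested and yield.
    print(implied_production - published_production)  # 20.0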

Revamping Agricultural Statistics: NASS prepares an annual compilation of statistical and administrative data series of many Department of Agriculture and other Government agencies. This was previously known as the Agricultural Statistics hard copy book, although the entire publication is now also available over the Internet. This publication was always submitted to the U.S. Government Printing Office (GPO) for data entry (using their proprietary software). Data had to be transferred to large scale listing sheets and, after data entry, all printed data tables had to be proofed. There was often a 3-4 month gap from submitting files until tables were ready for proofing. In this case, the customers NASS responded to were the data providers from NASS and about 15 other organizations. The providers’ main complaint was the extreme time gap. Data files had been put away and, in many cases, revisions might have been made before proof copies were received. NASS explored various improvement options with GPO. When no normal alternatives were acceptable, NASS insisted that GPO provide the data entry software and training. NASS then performed the data entry and data table creation functions. The time lag was reduced from 3-4 months to 3-4 working days. Data providers were extremely pleased. With the faster turn around, NASS was able to accept data files which were 3 months fresher. It often meant that an extra year’s data could be added. The first such publication was renamed Agricultural Statistics 1995-96, instead of 1995, to illustrate the improvement.

Advancing Census Revisions: Following completion of each 5-year census of agriculture, NASS had always reexamined all survey data, administrative data, and estimates since the preceding census in order to publish “final” revisions. The 1992 Census of Agriculture illustrated the normal revision timing. Since that Census related to crops produced in calendar year 1992 and livestock inventories at the end of 1992, mail out occurred in December 1992 and most data were collected during 1993. Processing and publication took until September 1994. NASS then conducted its review and published revisions by the end of 1994. Thus, preliminary 1992 estimates were revised twice - - at the end of 1993 and again in 1994. When NASS received the 1997 Census of Agriculture responsibility, it set an immediate goal of completing the review and publication much quicker - - February 1, 1999 instead of September. However, it assumed the detailed revision review would be after the official publication. Data users asked NASS to reconsider. Their logic was that NASS surely had a good knowledge ahead of time of the census results. NASS might be able to publish revisions before the Census release and save data users from entering preliminary 1997 revisions with the end of year 1998 estimates and then reentering final 1997 revisions within a few months. There were logistic concerns about completing revision reviews and the final census data review simultaneously. Also, there was apprehension that releasing revisions before the 1998 final estimates might impact markets if revisions implied that significant changes might be needed in the 1998 levels. However, NASS did proceed with the historic revisions based on unreleased preliminary census levels. All revisions were issued at least 3 to 10 working days before the corresponding end of 1998 data series. Figure 1 illustrates the changes in the two cycles.
