Quality measuring with respect to electronic information markets and particularly online databases

M. Rittberger
Universität Konstanz, Informationswissenschaft, Fach D87, D-78457 Konstanz
Tel: +49-7531-883595
email: [email protected]
Abstract

The online market and the provision of technical information have undergone fundamental changes as a result of the growth of the Internet. Electronic marketplaces thereby represent a good opportunity for exchanging technical information. Nineteen criteria for the evaluation of technical information markets are presented in this article, and methods of operationalization are discussed. Databases are an important component of technical information markets; quality criteria for the production of bibliographic databases are therefore treated in more detail.

1 Introduction

Major changes have been taking place in the commercial online market for technical information as a result of the expansion of the Internet. Whereas previously the market was dominated by a few large hosts offering a broad range of disciplines, today companies that were formerly only content producers also distribute their products in the online market. This trend was made possible by the Internet and the available database technology. The producers of databases once marketed their products either in print or on CD-ROM, while online versions were offered by the online hosts. Online databases are now increasingly being offered, supplemented by other services, on the Internet within technical electronic marketplaces.

Electronic marketplaces offer contents optimized for specialized publics which greatly exceed the services previously offered online. They are produced and offered on the WWW by database producers, publishers, associations or groups in cooperation with other institutions. This can range from simple cooperation to improve document provision up to comprehensive information services for specific target groups, e.g., physicians or agronomists.
The traditional online hosts have taken the first steps toward electronic marketplaces by improving access to their wide palette of information offerings on the WWW through arrangements for simpler access and optimized user interfaces, for example, DIALOG-Web or STN-Easy. Electronic marketplaces should offer information and products designed for the specific interests of technical publics. Studies
by the VDI1 and the DIPF2 [1, 2] show that literature information does not constitute as large a share of the information need as is often supposed. More attention will therefore be given to other types of information in electronic marketplaces, for example, information on addresses, projects, experts, norms, or specific technical processes and developments. Also important is analyzed or synthesized information, such as summaries or translations of research results.

The concept of the electronic market can be traced back to work by [3, 4, 5], who offer a more technological view of electronic markets. They deal with the added value achieved by the use of information technology for information services, such as that resulting from integrated electronic reservation systems in the tourism industry. Attributes of electronic marketplaces include the independence from time and place provided by telecommunications, simpler procedures for obtaining information on the participants in the electronic market and the goods offered, and reduced transaction costs [6]. Electronic information markets serve as information brokering systems in the interest of the economy (providers and users) and the public, to satisfy information needs, and are thus components of the information infrastructure, as are hosts, libraries or public administrative information services [7].

The differentiation of electronic markets on the basis of geographic features (international, national, regional) was initially justified by the development of the World Wide Web, since traders on the WWW at first identified strongly with the regions they represented3. It is now being superseded by content-based classifications which permit users of the WWW to search for information on the basis of content-oriented criteria.
The pioneers in this trend are the WWW catalogues, for example, Yahoo4, the WWW Virtual Library5 or Lycos6, which have introduced a hierarchical order into the chaotic growth of the Internet. They still operate, however, without essential, transparent quality criteria which would offer the user reliability in regard, for example, to the contents of the individual sources or the range of offerings.

Publishers, database producers and scientific organizations offer an appropriate palette of information services in technical electronic markets. Naturally a large share of these offerings is based on data collections assembled using the highest possible standards and offered on the WWW. Examples are the Engineering Information Village7 of Engineering Information, Inc. in the engineering field, or the DAINet8 of ZADI9, which offers information on the agricultural sector. For these providers, 'conventional' online databases like Compendex or Agris play an important role in information provision, alongside other information collections.

1 Verein Deutscher Ingenieure, www.vdi.de
2 Deutsches Institut für Internationale Pädagogische Forschung, www.dipf.de
3 It is also mirrored in the categorization (subdivision, classification) of domain names by country, which is gradually being replaced by a different categorization system.
4 www.yahoo.com
5 http://www.w3.org/pub/DataSources/bySubject/
6 http://www.lycos.com/
7 www.ei.org/eihomepage/village/intro.html
8 Deutsches AgrarInformationsNetz, www.dainet.de
9 Zentralstelle für AgrarDokumentation und -Information, www.zadi.de
After discussing the concept of quality in relation to electronic information markets, we will propose a system for ranking electronic marketplaces, in which online databases play a very important role. We will discuss in depth the quality aspects of the production of online databases10. As an example we will present reference databases with literature references, also called literature or bibliographic databases. Most of our conclusions in this part can, however, also be applied to other information sources similar to bibliographic databases.

2 Quality of technical electronic marketplaces

In the relevant literature one finds a large variety of descriptions, definitions and notions for the concept of quality [9, 10]. In the narrower environment of information science, [11] states that "quality is, like ethics, situational - at least in my universe - and I suspect in that of most search professionals." Arnold believes that "quality is electronic publishing's golden idol," and for him quality is a question with many answers: "Toyota Motors defined quality as products that conform to the specifications" [12]. [13] formulates criteria for information quality in regard to spoken, written, graphic and multimedia information, as well as criteria for a general information standard. From the user's viewpoint, important criteria have been developed, such as accuracy, comprehensiveness, up-to-dateness, reliability and validity [14, 15, 16]. A comparative study of the development of general criteria for the quality of information was made by [17]. [18], by contrast, developed seven different quality criteria for evaluation using a survey of searchers on the WWW. Not only the system but also the user perspective for the evaluation of search systems is described in detail by [19]. A general overview of the assessment of information is offered by [20].
In contrast to traditional online databases, electronic technical markets offer a much broader information service and a large number of additional interaction and presentation techniques. Evaluating the total quality of such markets therefore requires considering the individual components from which a technical market is constituted. The basic functions of general electronic marketplaces are information, presentation, communication and transaction [21], whereas [22], based on [6], divides the phases on electronic markets into information, agreement, settlement and after-sales. A more user-oriented view for the evaluation of technical electronic markets is based on the categories content, presentation, interaction, system and provider. There is a strong relation between content, which becomes information if it is needed and used, and presentation, which describes the way content is offered.

The content level, i.e. the data content which is available in the electronic technical market: The conventional attributes of online databases are significant at the content level, since a not inconsiderable number of technical electronic marketplaces are based on these collections. A major role is played by the scope and coverage, comprehensiveness, currency and timeliness, accuracy and consistency of the content. Besides the information and databases offered by the provider of a technical electronic marketplace, the offering is supplemented by a large quantity of data content which is not directly controlled by the provider. These contents are collected and organized by the operators and are not necessarily rechecked for their accuracy and other quality attributes. The above-named criteria can only be guaranteed for this content domain if the provider chooses to introduce control mechanisms which can check the quality of the content. For example, it may suffice to have an indication of whether the content of the external provider is regularly maintained, whether it is up-to-date, etc.

The presentation level, which describes the stylistic means employed for the presentation of content: Basic design principles [23, 24] should be used in designing user interfaces; e.g., individual pages should not be overloaded, the pages of an electronic marketplace should be set up on the basis of uniform standards, and uniform metaphors and graspable, graphic symbols (icons) should be used11. Progressive systems should be able to react to user behavior and thereby specifically coordinate and adapt presentation and interaction in the specialized electronic marketplace for users or user groups12.

The interaction level, with whose aid content can be found by means of navigational exploration or searching: For navigation in hypertext structures it is especially significant to have access to suitable navigation aids such as graphical overviews, paths and guided tours, which make orientation in the system easier and thus increase effectiveness in navigation [28, 29, 30]. To complement navigational exploration in electronic marketplaces, efficient and effective access to information by means of search and retrieval procedures must be made possible.

10 This section is based largely on findings presented in [8].
Retrieval problems in electronic marketplaces, as on the Internet generally, occur in resource selection [31, 32, 33, 34, 35], searching [36, 37, 38, 39, 40, 41, 42, 43] and the visualization of retrieved results [44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54]. Further interactive elements could be discussion forums, bulletin boards or transactions.

The system level, in which data content is deposited and with which the hypertext structure is managed: Consistent data management is of enormous importance. The often-employed technique of setting up file-based WWW hypertexts is unacceptable for a professional environment: changes in the uniform basic design of WWW pages and the management of links and units can no longer be made without appropriate database support. A further essential success factor for an electronic marketplace is based on the structuring possibilities typical of hypertext. First, in WWW hypertexts a large collection of information can be arranged and stored on the basis of different criteria or classifications (e.g., alphabetical, chronological, content-based, geographic). Integration into a hypertext structure based on these classifications is an essential contribution to making the relevant information accessible. Second, every information offering should be multiply integrated in a hypertext to permit access to the information in various contexts. Reaching any information page should require only a minimum number of steps from an entry page.
11 Style guides help in creating better human-computer interfaces, e.g. [25, 26, 27]. A good starting point is the homepage of J. Nielsen and his Alertbox (http://www.useit.com/alertbox/).
12 For example, a metaphor which is accepted in Europe may be unacceptable in Asia.
Metrical procedures are suitable for measuring these dimensions, in order to determine the density and compactness of a hypertext [55].

The provider level, which is responsible for the choice of information and services to be offered on the electronic market: This is where decisions are made about the manner of providing customer service, marketing information, integrating advertising partners and similar matters. Business policy is set on the provider level, and the importance of success factors for the electronic marketplace is determined there.

In contrast to the systems familiar from online databases, in this model particular value is attached to the access level, in order to take into account the additional means present for processing information in a hypertext system such as the WWW. An evaluation of specialized electronic marketplaces can only be made to the extent that evaluative criteria are created for the five above-named domains. The weighting of the individual criteria within their domain, and the influence of the domains on the overall quality to be specified, must be oriented to user needs. Despite the great number of relatively unorganized criteria collections for the evaluation of electronic marketplaces13, there are only a few serious works on this topic area. An index number system for electronic marketplaces was set up as a first approach to operationalizing possible criteria [56]. Sixty criteria were identified from the technical areas of database quality, service quality, electronic markets, hypertext systems and WWW sites14, cost management, product quality, software ergonomics and software quality15 [57, 58, 59, 60, 61, 62, 63, 64, 65, 8, 66, 67]. The interdependencies of the criteria were worked out and the relationships between the criteria were deduced by means of empirical-theoretical statements or empirical-inductive procedures16.
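The compactness measure for hypertexts mentioned above can be sketched as follows. The formula and the common choice of conversion constant K = n follow the metric proposed in [55] as it is usually described; the function name and the graph encoding are our own assumptions.

```python
from collections import deque

def compactness(links, k=None):
    """Compactness Cp of a hypertext graph (after the metric in [55]).

    links: dict mapping each node to an iterable of link targets.
    k: conversion constant counted for unreachable node pairs
       (defaults to n, the number of nodes -- a common choice,
       assumed here).
    Cp is 1.0 for a completely connected hypertext and 0.0 for one
    with no links at all.
    """
    nodes = list(links)
    n = len(nodes)
    k = k if k is not None else n
    total = 0  # sum of converted distances over all ordered pairs
    for start in nodes:
        # breadth-first search gives shortest link distances from `start`
        dist = {start: 0}
        queue = deque([start])
        while queue:
            cur = queue.popleft()
            for nxt in links.get(cur, ()):
                if nxt not in dist:
                    dist[nxt] = dist[cur] + 1
                    queue.append(nxt)
        for other in nodes:
            if other != start:
                total += dist.get(other, k)  # unreachable pairs count as k
    max_sum = (n * n - n) * k   # every pair unreachable
    min_sum = n * n - n         # every pair directly linked
    return (max_sum - total) / (max_sum - min_sum)
```

A fully linked three-page hypertext yields Cp = 1.0, a pageset without any links Cp = 0.0; real WWW sites fall in between.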
The 60 criteria were arranged in an index number system with one top index number and 19 main index numbers and evaluated using two specialized marketplaces in the agricultural domain17. The evaluation produced 11 further criteria which were built into the index number system. The following section describes the 19 main index numbers which are directly subordinated to the top index number18.

1. Accessibility indicates how easy it is to locate an electronic marketplace. Different forms of accessibility play a role, for example, the domain name, robots or catalogues on the WWW, or entries in other Web sites.

2. Accuracy means the care with which information units are introduced into the electronic marketplace. It means the avoidance of errors in all stages
13 Usually efforts are made to find points for the evaluation of Web sites, while quality criteria for electronic markets are sought less often.
14 Website Analysis and MeasureMent Inventory (WAMMI, http://www.ucc.ie/hfrg/questionnaires/wammi/index.html).
15 Software Usability Measurement Inventory (SUMI, http://www.ucc.ie/hfrg/questionnaires/sumi/whatis.htm).
16 Besides the named procedures, logical derivation and model-supported derivations [68] can be used to develop an index number system.
17 The already mentioned DAINet (www.dainet.de) of the ZADI and the privately organized AGRAR.de (www.agrar.de).
18 A complete description of all index numbers would go beyond the frame of this work. One is found in [56].
of creating an information unit and attention to quality standards in implementing and presenting information in the electronic information market:
a. in document analysis;
b. during entry into the data fields;
c. in orthography.

3. Consistency is uniformity and agreement in the processing of all information units. In order to fulfill the requirements for a high level of consistency, strict compliance with rules and working instructions is necessary:
a. in the choice of information units and information resources (scanning);
b. in classification and indexing (e.g., classification schema, thesaurus, indexing rules);
c. in cataloguing (e.g., cataloguing rules, category schema).

4. The processing of information contents is the analysis or synthesis of information in regard to its content, e.g., by means of summaries.

5. Currency and timeliness describes the time period between the publication of a text (publication date) and the appearance of the information unit for this publication in an electronic marketplace.

6. Data protection is an important quality criterion in regard to, e.g., user profiles. User profiles may be necessary to give excellent support to the initiatives which individual users carry out in the electronic marketplace. It appears reasonable that the provider of an electronic marketplace publish information about the use of personal data in the marketplace.

7. Design and aesthetic information processing is presentation from a functional and aesthetic viewpoint, for example, the readability of information.

8. Efficiency and effectiveness are interpreted in the frame of an approach taken from information retrieval. In the case of efficiency, the effort involved on the part of information searchers and the costs of the information search should be kept as low as possible.

9.
In this context effectiveness means an information system which retrieves as nearly as possible the total amount of relevant information, while the share of retrieved information that is non-relevant should be kept as low as possible.

10. Innovation is a criterion in regard to the technique employed, the organization and structuring of an electronic marketplace, and the possibilities for users to participate interactively in the information processing procedure.

11. Privacy means authentication (preventing unauthorized use), integrity and confidentiality, which are quite important areas for an electronic marketplace. Privacy is especially significant in performing transactions on the Web.

12. Relevance designates the information units which meet an information need and thus contextually correspond to the inquiry. The evaluation of relevance is directly dependent on the scope and coverage of the subject area of the technical market. By scope we mean the subject-related contents of an area which is touched by the electronic market. All the relevant information, i.e., all information units classifiable as dealing with the subject matter of an area, must be described in their full extent. The area can be subject-matter oriented, multi-disciplinary, or task-oriented. Geographic location, linguistic region and time period are further criteria of coverage.
13. The quality of resources must be evaluated on the basis of the same criteria applied to the content of the electronic marketplace itself; the evaluation can thus be quite extensive. The quality of resources gives information about the overall quality of the marketplace, which should guarantee a minimum quality level for the information resources used and offered.

14. Additional services are services offered in an electronic marketplace which are not part of its basic services, for example, translation services, electronic document delivery, etc. In order to evaluate these services, account must be taken not only of their availability on the electronic marketplace, but also of the quality of the individual services.

15. Standards are used to unify and impose norms. For this purpose it is necessary to create rules or guidelines for individual work procedures, which can be taken either from internal documentation or from general norms (e.g., ISO norms); see the section on the production of online databases.

16. Transactions make it possible to achieve contractual agreements on the exchange of goods and/or services. We distinguish between the information phase, agreement phase and settlement phase. Electronic markets which sell technical information are particularly suitable for thorough support of all transaction phases, since the products in such a marketplace are available in digital form.

17. The truth and correctness of information can only be tested with a considerable expenditure of effort. In the scientific system, or in the case of patent information, there are relatively well-functioning quality measurement procedures which also take into account the truth value of information. Not all the documents in an electronic marketplace are subjected to such an extensive evaluative procedure. Thus truth value can usually only be tested through comparison with other sources or media, which constitutes an editorial recheck.

18.
Trustworthiness is a basic requirement for electronic marketplaces. Information goods are in principle goods for which trust and experience are essential (buyers usually do not see the product before purchasing it). It is to be noted that the weighting of the usual quality attributes of service companies, such as material, reliability, helpfulness, competence, anticipation of needs, trustworthiness, security, accessibility, communication and customer understanding [69], tends to move away from material criteria toward soft criteria such as accessibility, communication and trustworthiness.

19. Validity: Information is always relevant to action and can thus never be absolutely valid. It is related to the addressee, dependent on its reception, should be new, should take the context into account, and is thus usually time-dependent and goal-oriented. In an electronic marketplace, for example, the authors of information units and the dates of production must be given.

In Table 1 the 19 criteria are classified in the five domains of content level, system level, interaction level, presentation level and provider level. Most of the main index numbers are relevant on the content and provider level. This explains the wide range of quality aspects on these levels, but says nothing about their importance for an individual user. Table 1 shows that most of the criteria are relevant to at least two levels, but four criteria (accuracy, relevance, resource, truth/correctness) are especially relevant to the content level.

Table 1: The 19 main index numbers (Accessibility, Accuracy, Consistency, Content Information, Currency and Timeliness, Data Protection, Design and Aesthetic Information, Efficiency, Effectiveness, Innovation, Privacy, Relevance, Resource, Services, Standards, Transaction, Truth/Correctness, Trustworthiness, Validity) are evenly distributed over the levels Content, Presentation, Interaction, System and Provider.

The criteria must be weighted according to their overall importance and their importance for a special user group (e.g., scientists). The criteria can be operationalized in three ways:
- evaluation of the presence or absence of an attribute (e.g.: is there sufficient information about the provider of the electronic marketplace?);
- evaluation with calculated dimensions (e.g., evaluation of the density of links in an electronic marketplace);
- evaluation of an attribute on the basis of a subjective judgement (e.g., trustworthiness).
In the first case the evaluation is mapped to the values 0 and 1; in the second and third, a unipolar scale with equally large intervals is employed. Positive evaluations lie closer to 1, whereas negative evaluations tend toward 0.
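The aggregation of such [0, 1] criterion scores into a top index number can be sketched as a weighted mean. This is a minimal illustration of the idea, not the scheme from [56]; the criterion names, scores and weights below are invented for the example.

```python
def top_index_number(scores, weights):
    """Aggregate criterion scores in [0, 1] into a top index number.

    scores:  dict criterion -> score in [0, 1]; binary checks map to
             0/1, calculated and subjective ratings use a unipolar
             scale where values near 1 are positive.
    weights: dict criterion -> non-negative weight reflecting the
             importance of the criterion for a given user group.
    """
    total_weight = sum(weights[c] for c in scores)
    if total_weight == 0:
        raise ValueError("at least one criterion must carry weight")
    return sum(scores[c] * weights[c] for c in scores) / total_weight

# Illustrative scores for three of the 19 main index numbers:
scores = {"accessibility": 1.0,    # binary: marketplace easy to locate
          "currency": 0.6,         # calculated from update intervals
          "trustworthiness": 0.8}  # subjective rating
weights = {"accessibility": 1.0, "currency": 2.0, "trustworthiness": 1.0}
```

For these invented values the weighted mean yields a top index number of 0.75; a user-group-specific weighting (e.g., for scientists) simply changes the weights dict.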
An essential criterion for the quality of an electronic marketplace is therefore the accessibility of high-quality databases. We focus on online databases, which are often available in technical information markets, and limit our consideration to bibliographic databases and their production. Other important attributes, such as the search and retrieval system or added services (e.g., document delivery, links to author homepages), are the responsibility of the electronic marketplaces and are not dealt with here.

Various authors have described quality attributes and criteria for the production of databases and given details not only on individual parts of an information unit, but also on individual work steps [70, 71, 72, 73, 74]. Despite the range of interpretations of quality by database producers [75, 76, 77, 78, 79], the literature shows that over the last few years there have been increasing calls for higher standards regarding the quality of various attributes and aspects of information services. The transition from printed sources containing a few thousand information units to the electronic manipulation of hundreds of thousands or millions of information units in a database, and the direct use by information services and end-users, have created a new situation which gives users the opportunity to exert direct influence on database producers and database providers. But even today it can still be maintained that there is no comprehensive and objective concept for the quality of databases, and that the major unsolved problem in regard to the quality of information services consists in the development of usable performance criteria [80]. Numerous criteria are listed in [81], whereas [82, 14] look at the quality dimensions of data from the more technical view of the management of databases or in terms of total quality management (TQM) [83].
End-users, and especially information brokers, have increasingly defined their requirements in terms of the services which they expect from information providers [84, 85, 86, 87, 88, 89, 90, 91, 92]. In an opinion survey of European information specialists from 12 different countries, Wilson asked the specialists to rank ten quality criteria for databases, selected from SCOUG 1990 (see [93]) [94]. He found the following rank order, based on the significance of the criteria: coverage, accessibility, timeliness, consistency, accuracy, value, documentation, harmonization, output, support. A very detailed discussion of quality in database production is given in [95]. Its summary of quality requirements, attributes and criteria contains demands not only on the database producer, but also on the host, although users do not wish to accept this distinction [71].

From the above discussion, the five most significant quality requirements are scope and coverage of the subject area, comprehensiveness, currency and timeliness, accuracy, and consistency. Apart from comprehensiveness, these requirements have been discussed in detail above. Comprehensiveness in database production means the inclusion and presence of a variety of document types (publications): monographs, chapters and articles in monographs; journals; journal articles; reports; articles and/or chapters from reports; conference papers and conference proceedings; grey literature; dissertations; patents; norms. Comprehensiveness can be international or limited on the basis of geographic, temporal or linguistic viewpoints (e.g., dissertations only from the English-speaking world).
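A ranking survey like Wilson's can be aggregated, for example, by mean rank position. This is only one simple aggregation scheme; the method actually used in [94] may differ, and the respondent rankings below are hypothetical.

```python
from statistics import mean

def aggregate_ranking(rankings):
    """Combine individual rankings into an overall rank order.

    rankings: list of lists; each inner list orders the same criteria
    from most to least significant for one respondent.  Criteria are
    sorted by their mean rank position (lower is better).
    """
    criteria = rankings[0]
    avg = {c: mean(r.index(c) for r in rankings) for c in criteria}
    return sorted(criteria, key=lambda c: avg[c])

# Hypothetical responses over four of the SCOUG criteria:
votes = [["coverage", "timeliness", "accuracy", "support"],
         ["coverage", "accuracy", "timeliness", "support"],
         ["timeliness", "coverage", "accuracy", "support"]]
```

For these invented votes, coverage comes out first and support last, mirroring the kind of ordered result reported in the survey.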
In the next section we will develop a model for the "quality profile" of databases which presents qualitative and quantitative statements on "quality indicators" for the individual parts and elements of an information unit or database. This model will be oriented towards the production of databases. To illustrate and clarify these concepts, we will provide quantitative and qualitative details of two hypothetical databases (db1 and db2) in various tables. This will show the different possibilities for the analysis of data, both formal and content-based, as well as for the document types. In addition, where possible the connection between user requirements and quality will be shown, as well as the relationships to the information chain.

3 Production of Databases

In the following discussion we consider in greater detail the production process for a database and distinguish three production steps: the acquisition of the original document; the analysis of the document (selection, subject analysis, bibliographic analysis); and the data recording and production system.

3.1 Acquisition of the original document

The acquisition of a document requires three steps: discovering and monitoring the publication and offering of literature; preselection of the relevant sources; and the actual acquisition. Literature monitoring and scanning require, on the one hand, subject knowledge, in order to be able to determine the relevance of a source. The actual acquisition, on the other hand, requires documentary or bibliographic knowledge, in order to ensure that relevant available documents are identified, and that selected documents are promptly ordered and delivered. Table 2 shows the various ways of procuring a document for a database. It indicates whether a document still has to be obtained conventionally, or whether "half-ready products" or even analyzed documents can already be delivered.
The percentage figures for the two databases (db1 and db2) are thus a measure of the speed with which documents can be brought into the production process. As in the case of db2, higher percentages for the delivery of galley-proofed and analyzed documents are thus currency indicators. A high share of conventionally procured documents (e.g., through exchange), as with db1, suggests a lower degree of currency for the database. The methods of acquisition for db1 and db2 as described in Table 2 are not yet satisfactory: for db1, 80% of the documents are still delivered conventionally, whereas for db2, 40% of the documents are already analyzed by the publisher and directly transferred to the database, which indicates high accuracy and authenticity. With the ongoing change from conventional acquisition (i.e., with the database producers themselves doing the entire process of document analysis) to the delivery of bibliographically and content-analyzed documents, a quicker and more efficient
19 Percentage shares are given as the percentage of the given value for the production of a database over a certain period of time (e.g., a year).
Type of Procurement (%-Share19): db1 / db2
- Conventional acquisition by purchase, exchange or charge-free delivery of single orders for the sources: 55% / 15%
- Preordering of document series for the sources: 25% / 15%
- Direct submission of documents by the publishers on the basis of specific agreements: 15% / 15%
- Direct submission of galley-proofs of documents by the publishers on the basis of specific agreements: 5% / 15%
- Direct submission of analyzed documents by the publishers on the basis of specific agreements (e.g., as worksheets, machine-readable texts): - / 40%

Table 2: Forms of acquisition for bibliographic databases.
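The currency reading of Table 2 can be condensed into a simple indicator. The grouping of acquisition forms into a slow and a fast path is our interpretation of the table, and the key names are invented for the sketch.

```python
def acquisition_profile(shares):
    """Summarize acquisition shares (as in Table 2) as currency indicators.

    shares: dict mapping forms of acquisition to their percentage share
    of database production.  Conventionally acquired and preordered
    documents must still pass through the full in-house process (slow
    path); galley-proofs and publisher-analyzed documents enter the
    production process faster and therefore indicate higher currency.
    """
    slow = shares.get("conventional", 0) + shares.get("preordered", 0)
    fast = shares.get("galley_proofs", 0) + shares.get("analyzed", 0)
    return {"slow_path": slow, "fast_path": fast}

# Shares taken from Table 2:
db1 = {"conventional": 55, "preordered": 25, "direct": 15,
       "galley_proofs": 5, "analyzed": 0}
db2 = {"conventional": 15, "preordered": 15, "direct": 15,
       "galley_proofs": 15, "analyzed": 40}
```

Applied to the table, db1 shows an 80% slow path against a 5% fast path, while db2 reverses the picture with a 55% fast path, matching the currency assessment in the text.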
procurement of documents for the integration of such literature into databases could be achieved.

3.2 Document Analysis

The task of document analysis consists in making an accurate and comprehensive description of the original document. For this, clear and unambiguous methods of subject analysis and bibliographic processing are needed, based as far as possible on rules, guidelines, norms, manuals, etc. In addition, an accurate and consistent description of the contents, formal structures and physical characteristics of the data sets is necessary.

Selection of Documents

The selection of a document involves deciding whether a specific document (publication) should be considered for processing and inclusion in a database. The database producer who assumes responsibility for production must give a clear statement on selection within the database policy, to allow the customer an exact overview of the subject-related and bibliographic content of the database. For the selection, clear and unambiguous guidelines must be defined to determine database content, document types, delimitations and other selection criteria. In addition, tools like subject classification schemes, thesauri, keyword lists, document type lists, category schemes, etc. must be employed. Based on these guidelines and the subject-related tools, the scope and coverage of the database are determined. Other guidelines and tools specify the types of documents to be treated and thus determine the comprehensiveness of the database. Still
other guidelines determine the limits of a database in respect to the geographical area, language and other specific elements. Percentages, as numerical values, are indicators which permit an overview of the distribution. Besides the indicators for the evaluation of a database named in Tables 2 and 3, a further indicator is the number of documents present after acquisition and selection in relation to the sum of all possible documents. The number of possible documents can, of course, only be estimated. db1
db2
% - Share
% - Share
80%
100%
Subareas: A - general literature B - hardware C - comp. sys. org. D - software E - data F - theory of comp G - math. of comp. H - inf. sys. I - comp. methodologies J - comp. applications K - comp. milieux
15% 10% 40% 20% 10% 5%
5% 10% 15% 10% 5% 5% 5% 10% 15% 10% 10%
Type of Publication: Journal Article Book Report Grey Literature Dissertation Patent Norm Conference Contribution
35% 15% 30% 10% 1% 9%
65% 5% 2% 1% 2% 5% 20%
Attributes
Attributes
Selection Criteria Subject Area: (e.g. ACM-Classification)
Selection Criteria Delimitation: Geographical Time Period Language
EU countries Last Five Years EU Languages
International None All Languages
Availability of the Original
Yes
No
Processing Priority
Books
Core Journals
Table 3: Specific selection criteria for the choice of literature.
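The completeness indicator described above, the number of documents actually included after acquisition and selection relative to the estimated sum of all possible documents, is a simple ratio. The following sketch is illustrative only; the function name and the figures are invented, not taken from the article:

```python
# Illustrative sketch of the coverage/completeness indicator: documents included
# in the database relative to an (estimated) universe of all possible documents.

def coverage(included_documents: int, estimated_universe: int) -> float:
    """Share of the estimated document universe covered by the database."""
    if estimated_universe <= 0:
        raise ValueError("the estimated universe must be a positive count")
    return included_documents / estimated_universe

# E.g., 40,000 records against an estimated 50,000 relevant publications:
print(f"{coverage(40_000, 50_000):.0%}")  # 80%
```

Since the universe of possible documents can only be estimated, such a figure should be read as an approximate indicator, not an exact measurement.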
Table 3 lists the elements which are to be considered in deciding what to include in databases db1 and db2 (note 20). The columns for db1 and db2 indicate which subject classification areas, which document types and which delimitations are used, and they give the distribution in percentages. They show, for example, that database db1 is smaller than db2, because fewer subareas of the subject field were included; db1 contains above all books, reports and grey literature, while db2 is a database chiefly containing journal articles and conference papers. This information clearly relates to the completeness of a database. More types of publications were included in db2, even though the focus there is on journals and conference contributions; in db1, by contrast, the nonconventional publications (grey literature and reports) play a more important role.

Further important information concerning comprehensiveness can be inferred by studying the delimitations of a database. Included in db1 are documents which were published in EU countries over the last five years, while db2 includes journal articles and conference papers which were produced internationally. Inferences can be made from the processing priority about the currency and topicality of the contents of a database. From db2's preference for journals, in contrast to db1's favoring of books, we can assume that the contents of db2 are more current and topical than those of db1, even though db1 includes reports and grey literature, which are not highly valued in its processing priority. Db1 has a more specialized coverage, holds a high percentage of nonconventional literature and covers a smaller regional scope. Db2, in contrast, is more international and has a wide content-related range: it contains many journals and conference publications, which are readily available, but only a small number of other document types. Thus both databases are incomplete in certain respects.

Subject Analysis

Subject analysis serves as a means for the description of a publication's scientific contents.
The nine points listed for the subject analysis (see Table 4) essentially specify the scientific contents and thus the value of a database. The number of points dealt with establishes the breadth of an analysis, and the individual numerical values and percentages suggest the worth and depth of the evaluation. Surprisingly, despite its significance in the production process, subject analysis was not listed in the enumerations of user requirements [93, 94]. Table 4 summarizes the steps involved in subject analysis. They include abstracting, classifying, various possibilities for indexing, and further elements such as main keywords, data identification, title augmentation and indexing for special areas. The columns for db1 and db2 show which steps were carried out for the two databases and give typical values for these steps. Db1 and db2 differ strongly in subject analysis. In db1, abstracts are taken over from the original, while in db2 new abstracts are composed. The takeover of abstracts in db1 makes possible a quicker processing of documents and therefore contributes to the timeliness of the database, while the composition of abstracts in db2 increases the consistency of the abstracts, as uniform standards are employed for their creation, as, for instance, the 'Instructions for submitting abstracts' [97].
Note 20: For an overview of the number of documents procured within a certain period, not only in terms of contents (e.g., with a classification according to the chief groups of a classificatory scheme) but also in terms of document type, absolute values can also be given for the selection criteria in Table 3.
Subject Analysis                                            db1 (%-share or   db2 (%-share or
                                                            num. values)      num. values)
On the basis of the Original Document                       90%               100%
Abstract:
  Creation                                                  30%               90%
  Inclusion (Take over)                                     70%               10%
  Improvement                                               30%               20%
  Translation                                               60%               35%
Subject Classification:
  Total Number of Terms within a Classification
  Scheme (e.g. [96])                                        157               570
  Number of Terms per Information Unit                      1.2               2.4
Type of Content:
  Total Number of all Codes within a Type of
  Content Scheme                                            -                 10
  Number of Codes per Information Unit                      -                 1.7
Indexing:
  Thesaurus:
    Total Number of Available Descriptors                   -                 20,000
    Number of Descriptors per Information Unit              -                 11.8
  Controlled Vocabulary:
    Total Number of Available Descriptors                   -                 -
    Number of Descriptors per Information Unit              -                 -
  Supplementary Keywords:
    Number of Descriptors per Information Unit              5.5               -
Main Keywords and Qualifier Pairs (M-Q):
  Number of Terms per Information Unit                      -                 2.4
Data Identification (Data Flagging and Tagging)             -                 -
Title Augmentation                                          10%               -
Indexing for Special Areas (e.g., chemistry, astronomy)     -                 100%

Table 4: Values given for the subject analysis of a bibliographic database.
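Several of the indicators in Table 4 are simple shares and per-unit averages computed over the produced records. A minimal sketch, assuming a hypothetical record structure; the field names and sample values are invented for illustration:

```python
# Hypothetical records as a database producer might hold them after analysis:
# each information unit carries its assigned descriptors and an abstract flag.
records = [
    {"descriptors": ["reactor safety", "neutron flux"], "abstract": True},
    {"descriptors": ["dosimetry"], "abstract": True},
    {"descriptors": [], "abstract": False},
]

# Depth of indexing: average number of descriptors per information unit.
depth = sum(len(r["descriptors"]) for r in records) / len(records)

# Share of information units that carry an abstract.
abstract_share = sum(r["abstract"] for r in records) / len(records)

print(f"descriptors per unit: {depth:.1f}")          # 1.0
print(f"units with abstract: {abstract_share:.0%}")  # 67%
```

Averages of this kind are what figures such as "11.8 descriptors per information unit" in Table 4 condense; the breadth of the analysis corresponds to how many of the listed steps were carried out at all.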
In db1 only supplementary keywords are given and a title augmentation is made to better identify the contents, while in db2 the type of content (e.g., experimental, theoretical, etc.) is established, a thesaurus is used, and 11.8 descriptors are assigned per document. Such an extensive analysis means an enormous advantage for users of db2 in a subsequent search.

For the evaluation of subject analysis, it is also necessary to know which rules, guidelines, norms, classifications, thesauri and manuals are available for the preparation of the different elements, and what competence they have, not only on the internal organizational level but also on the national or international level. The following enumeration gives examples of (a) internal organizational, (b) national and (c) international instruments of this sort:

Abstracting and Indexing:
(a) Manual for subject indexing [98];
(b) JICST Thesaurus English Version [99];
(c) Instructions for submitting abstracts [97];

Classification:
(c) Subject categories and scope descriptions [96]

Statements of which rules, etc., are employed for subject analysis are likewise quality indicators for the evaluation of an information unit or database. Subject analysis has great influence on the information chain, since its excellence and comprehensiveness strongly affect the relevance of search results.

Bibliographic Analysis

Bibliographic analysis contributes to the description of the formal elements of a publication or of an information unit. Table 5 includes the key elements which are drawn on in processing (note 21). They include document types, author, title, publisher data, conference elements and further specific elements, such as the International Standard Serial Number (ISSN) for journals, report numbers and corporate bodies for reports, or the International Patent Classification (IPC) for patents. As with subject analysis, it is established here which rules, etc. were used for inclusion and what competence they have (a) internally, (b) nationally and (c) internationally. Examples are:

Cataloguing:
(b) Guidelines for the cataloguing of documents [100]

Country codes:
(c) Codes for the representation of names of countries [101]
(c) Terminology and codes for countries and international organizations [102];

Journal Title:
(a) List of journals and serial publications [103];

The percentages and numerical values given in Table 5 show the extent to which the requirements were fulfilled. As before, the two databases differ considerably. Thus, for example, only the first author of a publication is listed in db1, whereas all authors are named in db2. The failure to include all authors naturally detracts from the precision of the database, since the document is not completely described. The weight which should be assigned to this inadequacy when evaluating a database depends on whether it is common in a specialized area for several or even many authors
Note 21: There can be further database elements which are necessary to fulfill the goals the database producer wants to achieve (e.g., citation data, URL, pricing information, physical properties, etc.).
Data Element                                  db1 (%-share or     db2 (%-share or
                                              num. value)         num. values)
Title of Publication:
  Original                                    70%                 10%
  English                                     30%                 90%
  Carrier Language of the Database            100%                100%
Authors                                       1                   all
Affiliation                                   -                   30%
Country of Affiliation                        -                   100%
Collaborators                                 max. 3              all
Editors                                       60%                 >90%
Publication Date                              -                   100%
Place of Publication                          70%                 80%
Collation                                     -                   60%
Original Language                             10%                 -
Availability Note
Contract-Number                               -                   -
Conference Elements:
  Title of Conference                         -                   100%
  Place of Conference                         -                   100%
  Date of Conference                          -                   100%
Type of Document, e.g. Journal:
  (Title / ISSN / CODEN / Date of
  Publication / Collation: Volume
  and Number)                                 100% 50% 60% 60%    100% 100% 100% 100%

Table 5: Elements of a bibliographic description.
to publish jointly. While conference elements are not included in db1, conference titles, places and dates are given in db2. These details are helpful in identifying conference publications (Tables 3 and 5) and have an informational value of their own for conferences. In db2, the affiliation is also listed, an indication which has increasing significance for users. The types of documents used in the database should also be mentioned: for example, both db1 and db2 include journals, but the description of journals in db2 is more comprehensive than in db1.

The complete and accurate inclusion of all formal attributes helps the user in selecting a document from a larger document collection and thereby influences the quality of the retrieval and its results. The different elements of a bibliographic analysis are needed for the further links within the information chain. For example, the greatest possible degree of fanning of the information unit during bibliographic analysis is useful in database design, in order to improve retrieval possibilities. Aside from subject selection, bibliographic data are necessary in information use for limiting results on the formal level, e.g., limiting the selection to information on patents obtained after 1987. Further, bibliographically error-free data are assuming increasing significance for information users, since the automation of document ordering and delivery requires correct data [104, 105]. For data exchange in international networks like the Internet, too, highly accurate bibliographic data are increasingly needed, and they must be produced according to international standards.

In regard to the aforementioned quality requirements of users, accuracy and consistency play an especially great role in bibliographic analysis. The accurate, correct and consistent application of rules and production of data elements can greatly increase not only the accuracy of a database but also the consistency of its data. Furthermore, a contribution can be made to the currency and high-speed processing of a database, since avoiding errors at this production stage makes expensive and time-consuming correction at a later date unnecessary.

3.3 Data Recording and Production System

Computer-supported production methods are being increasingly employed in the production of databases, especially because of the rapidly growing volume of available data.
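One numeric quality statement such production systems can derive, discussed in this section, is the error rate per 1,000 entered symbols. A minimal sketch; the function name and the counts are illustrative, not taken from the article:

```python
# Illustrative sketch of the error-rate indicator for data recording:
# the number of recording errors per 1,000 entered symbols.

def error_rate(num_errors: int, num_symbols: int) -> float:
    """Errors per 1,000 entered symbols."""
    if num_symbols <= 0:
        raise ValueError("num_symbols must be positive")
    return num_errors * 1000 / num_symbols

# E.g., 12 errors found while keying 48,000 symbols:
print(error_rate(12, 48_000))  # 0.25
```

The same ratio can equally be computed per data field, which makes it possible to locate which stages or elements of the production process contribute most errors.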
COMPINDAS (COMputer-supported and INtegrated DAtabase production System) [74], for example, uses computer-supported methods in all production phases. COMPINDAS includes functions for the acquisition and analysis of documents, the employment of reference data, statistical evaluation and the creation of machine-readable end products. The COMPINDAS data-recording scheme makes possible a very specific and detailed entry and structuring of data elements. For the entry of data, a comprehensive character set exists with which special symbols and formulas can be represented. Autonomous systems like METAL (Machine Evaluation and Translation of natural Language) [106], AIR (Automated Indexing and Retrieval) [107, 108] and Kurzweil Discover 7320 [109] support the production process.

Errors and their avoidance (see also [77]) play a major role in automatic procedures, since consistent, automatic checking of the entered data makes a subsequent correction procedure unnecessary. The error rate (the number of errors per 1,000 entered symbols, or the number of errors in a specific data field) can be used as a statement for a quality evaluation. Testing routines and reference data are employed in the production process. According to [110], five types of tests can be made:

Consistency Test: The included data are compared with standardized lists using a text-analysis procedure.

Plausibility Test: A matrix of elements, dependent on the type of document, sets down which predefined rules must be followed in an information unit; the absence of fields, or errors in the dependencies of data fields, is indicated.

Syntax Test: This is made in order to be able to further process data with defined formats. Errors can be avoided through the greatest possible fanning of the data elements.

Duplication Test: This test ensures that there are no double entries; connections between individual entries are also indicated.

Creation of Registers: Data elements are summarized in registers in order to detect irregularities through the use of structured overviews.

In the production process, the extent to which reference files are employed is also a measure of judgment. Typical reference files for creating a bibliographic database include information about thesauri (e.g. [111]), classifications, authors (e.g. AACR2 [112]), institutes and affiliations, conferences, journal titles and abbreviations, countries and country codes, locations (e.g. [113]), language designations, dictionaries and character sets.

Of the different quality requirements, accuracy and consistency are especially important in the use of a data recording and production system. The numerical values for error rates, the testing routines employed, and the reference data employed (with internal, national or international competence) are indicators of the quality of the production process. Essential for file building and for the adjacent links in the information chain are knowledge of the fanning, structuring and formatting of the data elements and of the character set employed, just as high consistency, accuracy and reliability in the data sets and the data per se simplify the production of databases.

4 Concluding Remarks

Electronic marketplaces, which offer relevant information in a technically limited subject spectrum, are increasingly important on the Internet.
Their feasibility depends on how far they are able to provide customers in the marketplace with the information they need. In our contribution we have described the relevant quality attributes of technical electronic marketplaces. Using online databases as an example, we have discussed the quality issue on a very specific level and thereby shown the great effort necessary to develop a high-quality product. On the basis of the user type, it must be decided in the individual case which procedures (for example, content analysis) are employed and whether all the documents in a database are processed with the same degree of thoroughness [114]. Also, for many other quality criteria for electronic marketplaces, additional studies are needed in order to determine the relevance and importance of the criteria in the overall context of the marketplace and its users.

References

[1] S. Grossmann and M. Rittberger, "Elektronischer Marktplatz Bildung. Umfeldanalyse und Konzeption," Nachrichten für Dokumentation, vol. 50, no. 1, pp. 13–23, 1999.
[2] "Homo technikus geht online," Password, no. 1, p. 16, 1997.
[3] T. W. Malone, J. Yates, and R. I. Benjamin, "Electronic markets and electronic hierarchies," Communications of the ACM, vol. 30, no. 6, pp. 484–497, 1987.
[4] T. Malone, J. Yates, and R. Benjamin, "The logic of electronic markets," Harvard Business Review, May–June, pp. 166–170, 1989.
[5] J. Yannis-Bakos, "Information links and electronic marketplaces: the role of interorganizational information systems in vertical markets," Journal of Management Information Systems, vol. 8, no. 2, pp. 31–52, 1991.
[6] B. Schmid, "Elektronische Märkte," Wirtschaftsinformatik, vol. 35, no. 5, pp. 465–480, 1993.
[7] R. Kuhlen, "Globale, regionale elektronische Marktplätze. Forum und Markt," in Herausforderungen an die Informationswirtschaft. Informationsverdichtung, Informationsbewertung und Datenvisualisierung (J. Krause, M. Herfurth, and J. Marx, eds.), Schriften zur Informationswissenschaft 27, pp. 313–322, Universitätsverlag Konstanz: Konstanz, 1996. Proceedings des 5. Internationalen Symposiums für Informationswissenschaft (ISI '96).
[8] M. Rittberger and W. Rittberger, "Measuring quality in the production of databases," Journal of Information Science, vol. 32, no. 1, pp. 25–37, 1997.
[9] D. Garvin, Managing Quality. The Strategic and Competitive Edge. Free Press: New York, 7th ed., 1988.
[10] International Organization for Standardization, Genève, ISO 8402: Quality management and quality assurance - Vocabulary, 2nd ed., 1994.
[11] R. Basch, "Decision points for databases," Database, August, pp. 46–50, 1992.
[12] S. Arnold, "Information manufacturing: the road to database quality," Database, October, pp. 32–39, 1992.
[13] M. Eppler, Informative Action. PhD thesis, University of Geneva, 1998. Chapter: Quality information standards.
[14] C. Fox, A. Levitin, and T. Redman, "Data and data quality," in Encyclopedia of Library and Information Science, vol. 57 (Supplement 20), pp. 110–122, Marcel Dekker: New York, 1996.
[15] J. Klobas, "Beyond information quality: fitness for purpose and electronic information resource use," Journal of Information Science, vol. 21, no. 2, pp. 95–114, 1995.
[16] R. Taylor, Value-Added Processes in Information Systems. Ablex: Norwood, NJ, 1986.
[17] C. Barry and L. Schamber, "Users' criteria for relevance evaluation: a cross-situational comparison," Information Processing & Management, vol. 34, no. 2–3, pp. 219–236, 1998.
[18] S. Rieh and N. Belkin, "Understanding judgment of information quality and cognitive authority in the WWW," in ASIS'98: Information Access in the Global Information Economy. Proceedings of the 61st ASIS Annual Meeting of the American Society for Information Science (C. Preston, ed.), pp. 279–289, Information Today: Medford, NJ, 1998.
[19] S. Harter and C. Hert, "Evaluation of information retrieval systems: approaches, issues, and methods," Annual Review of Information Science and Technology, vol. 32, pp. 3–94, 1997.
[20] J. Tague-Sutcliffe, Measuring Information. An Information Services Perspective. Academic Press: San Diego, London, 1995.
[21] R. Kuhlen, "Elektronische Märkte in der Informationsgesellschaft oder: die Informationsgesellschaft als elektronischer Markt," in Perspektiven multimedialer Kommunikation (U. Glowalla and E. Schoop, eds.), pp. 43–48, Springer: Berlin, 1996. Deutscher Multimedia Kongreß '96.
[22] D.-M. Lincke, "Evaluating integrated electronic commerce systems," Electronic Markets, vol. 8, no. 1, pp. 7–11, 1998.
[23] J. Bertin, Graphics and Graphic Information-Processing. De Gruyter: Berlin, 1981.
[24] B. Shneiderman, Designing the User Interface. Strategies for Effective Human-Computer Interaction. Addison-Wesley: Reading, MA, 3rd ed., 1998.
[25] Sun On The Net: Guide to Web Style, 1996. http://www.sun.com/styleguide/.
[26] UCLA Web Styleguides, 1996. http://www.ucla.edu/infoucla/styleguidelines/.
[27] Yale C/AIM Web Style Guide, 1997. http://info.med.yale.edu/caim/manual/.
[28] R. Hammwöhner, Offene Hypertextsysteme. Das Konstanzer Hypertextsystem (KHS) im wissenschaftlichen Kontext, vol. 32 of Schriften zur Informationswissenschaft. Universitätsverlag Konstanz: Konstanz, 1997. Habilitationsschrift.
[29] H. V. D. Parunak, "Hypermedia topologies and user navigation," in Proceedings of the Hypertext '89 Conference (Pittsburgh, PA, 5.–8. Nov.), pp. 43–50, ACM: New York, 1989.
[30] K. Utting and N. Yankelovich, "Context and orientation in hypermedia networks," ACM Transactions on Information Systems, vol. 7, no. 1, pp. 58–84, 1989.
[31] S. Chakrabarti, B. Dom, P. Raghavan, S. Rajagopalan, D. Gibson, and J. Kleinberg, "Automatic resource compilation by analyzing hyperlink structure and associated text," Computer Networks and ISDN Systems, vol. 30, no. 1–7, pp. 65–74, 1998.
[32] L. Gravano, K. Chang, H. Garcia-Molina, C. Lagoze, and A. Paepcke, STARTS: Stanford Protocol Proposal for Internet Retrieval and Search, 1998. http://www-db.stanford.edu/~gravano/starts.html.
[33] J. Janes and L. Rosenfeld, "Networked information retrieval and organization: issues and questions," Journal of the American Society for Information Science, vol. 49, no. 9, pp. 711–715, 1996.
[34] C. Lynch, "Networked information resource discovery: an overview of current issues," IEEE Journal on Selected Areas in Communications, vol. 13, no. 8, pp. 1505–1522, 1995.
[35] J. Powell and E. Fox, "Multilingual Federated Searching Across Heterogeneous Collections," D-Lib Magazine, September 1998. http://www.dlib.org/dlib/september98/powell/09powell.html.
[36] K. Bharat and A. Broder, "A technique for measuring the relative size and overlap of public Web search engines," Computer Networks and ISDN Systems, vol. 30, no. 1–7, pp. 379–388, 1998.
[37] M. Jeusfeld and M. Jarke, "Suchhilfen für das World Wide Web: Funktionsweisen und Metadatenstrukturen," Wirtschaftsinformatik, vol. 39, no. 5, pp. 491–499, 1997.
[38] S. Lawrence and C. Giles, "Searching the World Wide Web," Science, vol. 280, no. 3, pp. 98–100, 1998.
[39] V. Petras and M. Bank, "Vergleich der Suchmaschinen AltaVista und HotBot bezüglich Treffermengen und Aktualität," Nachrichten für Dokumentation, vol. 49, no. 8, pp. 453–458, 1998.
[40] C. Schwartz, "Web search engines," Journal of the American Society for Information Science, vol. 49, no. 11, pp. 973–982, 1998.
[41] L. Su, H. Chen, and X. Dong, "Evaluation of Web-based search engines from the end-user's perspective: a pilot study," in Information Access in the Global Information Economy. ASIS'98. Proceedings of the 61st ASIS Annual Meeting (C. Preston, ed.), pp. 348–361, Information Today: Medford, NJ, 1998.
[42] D. Sullivan, Search Engine Features Chart, 1999. http://www.searchenginewatch.com/webmasters/features.html.
[43] F. Teuteberg, "Effektives Suchen im World Wide Web: Suchdienste und Suchmethoden," Wirtschaftsinformatik, vol. 39, no. 4, pp. 373–383, 1997.
[44] B. Bekavac and M. Rittberger, "Kontextsensitive Visualisierung von Suchergebnissen," in Hypertext - Information Retrieval - Multimedia '97: Theorien, Modelle und Implementierungen integrierter elektronischer Informationssysteme (N. Fuhr, G. Dittrich, and K. Tochtermann, eds.), no. 30 in Schriften zur Informationswissenschaft, pp. 307–321, Universitätsverlag Konstanz: Konstanz, 1997.
[45] S. K. Card, "Visualizing retrieved information: a survey," IEEE Computer Graphics and Applications, vol. 16, no. 2, pp. 63–70, 1996.
[46] K. Cotoaga, W. König, S. Markwitz, and T. Rebel, Web-Benchmarking: Graphbasierte Wissenserschließung im Internet und Intranets. Arbeitspapier 96-02, Universität Frankfurt, Institut für Wirtschaftsinformatik, 1996.
[47] R. Däßler and A. Otto, "Knowledge Browser. Ein VRML-Retrievalinterface für GEOLIS(GFZ)," Nachrichten für Dokumentation. Zeitschrift für Informationswissenschaft und -praxis, vol. 47, no. 3, pp. 151–158, 1996.
[48] A. Dieberger, "Browsing the WWW by interacting with a textual virtual environment - a framework for experimenting with navigational metaphors," in Hypertext '96: The Seventh ACM Conference on Hypertext, pp. 170–179, ACM: New York, 1996.
[49] S. G. Eick, "Aspects of network visualization," IEEE Computer Graphics and Applications, vol. 16, no. 2, pp. 69–72, 1996.
[50] M. Hasan, A. Mendelzon, and D. Vista, "Applying database visualization on the World Wide Web," SIGMOD Record, vol. 25, no. 4, pp. 45–49, 1996.
[51] U. Krohn, Visualization for Retrieval of Scientific and Technical Information. PhD thesis, Technische Universität Clausthal, 1996.
[52] S. Mukherjea and J. Foley, "Visualizing the World-Wide Web with the Navigational View Builder," Computer Networks and ISDN Systems, vol. 27, no. 6, pp. 1075–1084, 1995.
[53] L. Nowell, R. France, D. Hix, L. Heath, and E. Fox, "Visualizing search results: some alternatives to query-document similarity," in SIGIR '96: Proceedings of the 19th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Zurich, Switzerland (H. Frei, D. Harman, P. Schäuble, and R. Wilkinson, eds.), pp. 67–75, 1996.
[54] R. M. Rohrer and E. Swing, "Web-based information visualization," IEEE Computer Graphics and Applications, vol. 17, no. 4, pp. 52–59, 1997.
[55] R. Botafogo, E. Rivlin, and B. Shneiderman, "Structural analysis of hypertexts: identifying hierarchies and useful metrics," ACM Transactions on Information Systems, vol. 10, no. 2, pp. 142–179, 1992.
[56] G. Mußler, Marketing und Qualitätsmanagement für elektronische Fachinformationsmarktplätze mit Blick auf eine Zertifizierung. Master's thesis, Universität Konstanz, Informationswissenschaft, September 1997.
[57] N. Bevan, "Usability issues in Web site design," in Design of Computing Systems: Cognitive Considerations. Proceedings of the Seventh International Conference on Human-Computer Interaction (HCI International '97) (G. Salvendy, M. Smith, and R. Koubek, eds.), pp. 803–806, Elsevier: Amsterdam, 1997.
[58] M. Grauer and U. Merten, Multimedia: Entwurf, Entwicklung und Einsatz in betrieblichen Informationssystemen. Springer: Berlin, Heidelberg, 1997.
[59] L. Hagen, Informationsqualität von Nachrichten: Meßmethoden und ihre Anwendung auf die Dienste von Nachrichtenagenturen. No. 6 in Studien zur Kommunikationswissenschaft, Westdeutscher Verlag: Opladen, 1995.
[60] P. Hofman and E. Worsfold, Specification for Resource Description Methods, Part 2: Selection Criteria for Quality Controlled Information Gateways, 1997. http://www.ukoln.ac.uk/metadata/desire/quality/title-page.html.
[61] J. Janik, "Web Company des Jahres," Business Online, no. 6, pp. 48–49, 1997.
[62] R. Kuhlen, "Elektronische regionale Märkte als kooperative Netze," in Informationsmanagement in der Informationsgesellschaft. Proceedings des 2. Konstanzer Informationswissenschaftlichen Kolloquiums (KIK '95) (P. Schieber, ed.), no. 23 in Schriften zur Informationswissenschaft, pp. 302–325, Universitätsverlag Konstanz: Konstanz, 1995.
[63] R. Kuhlen, "Organisationsformen und Mehrwertleistungen elektronischer Märkte," in Linguistik und neue Medien. GLDV-Jahrestagung vom 17.–19. März 1997 (G. Heyer and C. Wolff, eds.), Deutscher Universitäts-Verlag: Wiesbaden, 1998.
[64] J. Nielsen, Top Ten Mistakes in Web Design, 1996. http://www.useit.com/alertbox/9605.html.
[65] K. Oliver, G. Wilkinson, and L. Bennett, "Evaluating the quality of Internet information sources," in The Annual Convention of the Association for the Advancement of Computing in Education (AACE), ED-MEDIA/ED-TELECOM 97, Calgary, AB, Canada, 1997. http://itech1.coe.uga.edu/Faculty/gwilkinson/AACE97.html.
[66] H. Tilman, Evaluating Quality on the net 1997. 12.12.98: http://www. tiac.net/users/hope/findqual.html. [67] E. Wallmuller, Ganzheitliches Qualitatsmanagement ¨ in der Informationsver¨ arbeitung. Carl Hanser: Munchen, 1995. ¨ ¨ [68] H.-U. Kupper, Controlling, Konzeption, Aufgaben und Instrumente. Schaffer¨ Poeschel: Stuttgart, 1997. [69] V. Zeithaml, A. Parasuraman, and L. Berry, Delivering quality service. The Free Press: New York, 1990. [70] K. Burk ¨ and D. Marek, Produktion von wissenschaftlich-technischen Datenbanken Handbuch der modernen Datenverarbeitung (HMD), vol. 25, no. 141, pp. 45–54, 1988. [71] L. Granick, Assuring the quality of information dissemination: responsibilities of database producers Information Services & Use, vol. 11, pp. 117–136, 1991. [72] B. Lawrence and T. Lenti, Application of TQM to the continuous improvement of database production in Electronic information delivery: Ensuring quality and value (R. Basch, ed.), ch. Part I Database Production, pp. 69–87, Gower Publishing Limited: Hampshire, 1995. ¨ von bibliographischen Datenbanken: Die Datenbank [73] W. Luck, Qualitat ¨ ¨ PHYS in 5. Osterreichisches Online-Informationstreffen in Seggauberg, 1993. [74] D. Marek, Integrated system support for the cooperative production of bibliographic, referral and numeric databases in 17th International Online Information Meeting 1993 (D. Raitt and B. Jeapes, eds.), pp. 347–357, Learned Information: Oxford, 1993. [75] T. Aitchison, Aspects of quality Information Services & Use, vol. 8, pp. 49– 61, 1988. [76] E. Beutler, Assuring Data Integrity and Quality: A Database Producer’s Perspective in Electronic information delivery: Ensuring quality and value (R. Basch, ed.), ch. Part I Database Production, pp. 59–68, Gower Publishing Limited: Hampshire, 1995. [77] E. O’Neill and D. Vizine-Goetz, Quality Control in online databases in Annual Review of Information Science and Technology (ARIST) (M. Williams, ed.), vol. 23, pp. 
125–156, Elsevier: New York et al., 1988. [78] P. Townsend, Commit to quality. John Wiley & Sons: New York, 1986. [79] G. Wheeler, Securing product-service quality in large-scale bibliographic database production master thesis, University of Wales, 1988. [80] A. Gilchrist, Quality management in information services - a perspective ¨ von Informationsdiensten. 7. Internationale on European practice in Qualitat Fachkonferenz der Kommission Wirtschaftlichkeit der Information und Dokumentation e.V. in Zusammenarbeit mit der Gesellschaft fur ¨ Informatik e.V. GI und der International Federation for Information and Documentation FID. GarmischPartenkirchen, 2.-4. Mai 1993 (W. Schwuchow, ed.), pp. 92–99, 1993. [81] R. Juntunen, R. Ahlgren, J. Jalkanen, R. Hagelin, P. Helander, T. Koivulu, I. Kivela, ¨ E. Mickos, and A. Rautava, Quality requirements for databases - project
for evaluating Finnish databases in 15th International Online Information Meeting 1991, pp. 351–359, Learned Information: Oxford, England, 1991. [82] R. Fidel, Database design for information retrieval. A conceptual approach. Wiley: New York, 1987. [83] K. Medawar, Database quality: a literature review of the past and a plan for the future Program: electronic library and information systems, vol. 29, no. 3, pp. 275–272, 1995. [84] R. Basch, An overview of quality and value in information service in Electronic Information Delivery (R. Basch, ed.), ch. Introduction, pp. 1–10, Gower Publishing: England, 1995. [85] R. Fidel and D. Soergel, Factors affecting online bibliographic retrieval: a conceptual framework for research Journal of the American Society for Information Science, vol. 34, no. 13, pp. 163–180, 1983. [86] P. Jasco, Testing the Quality of CD-ROM Databases in Electronic information delivery: Ensuring quality and value (R. Basch, ed.), ch. Part III Quality Testing, pp. 141–168, Gower Publishing Limited: Hampshire, 1995. [87] A. Mintz, Quality control and the zen of database production Online, no. November, pp. 15–23, 1990. [88] B. Quint, Better Searching Through Better Searchers in Electronic information delivery: Ensuring quality and value (R. Basch, ed.), ch. Part II Role of The Search Intermediary, pp. 99–116, Gower Publishing Limited: Hampshire, 1995. [89] C. Tenopir, Database quality revisited Library Journal, no. 1, pp. 64–67, 1990. [90] C. Tenopir, Priorities of Quality in Electronic information delivery: Ensuring quality and value (R. Basch, ed.), ch. Part III Quality Testing, pp. 119–139, Gower Publishing Limited: Hampshire, 1995. [91] S. Webber, Criteria for comparing news databases in Online Information 92, 8-10 December 1992, London, England, pp. 537–546, Learned Information, Oxford, England, 1992.
[92] U. Weber-Schäfer, Die Nachfrage und das Angebot von externen Informationen zu Unternehmensstrategien in einem Online-Informationssystem. Entscheidungsorientierte Analyse am Beispiel des europäischen Binnenmarktes, Anforderungen und Konzepte. No. 1660 in Europäische Hochschulschriften: 5, Volks- und Betriebswirtschaft, Lang: Frankfurt am Main, 1995.
[93] R. Basch, Measuring the quality of the data: report on the fourth annual SCOUG Retreat, Database Searcher, October 1990, pp. 18–23.
[94] T. Wilson, "Equip: A European survey of quality criteria for the evaluation of databases: report on the questionnaire survey." European quality management programme for the information sector, 1994.
[95] P. Jasco, Content Evaluation of Databases, in Annual Review of Information Science (M. Williams, ed.), vol. 32, ch. 5, pp. 231–267, Information Today: Medford, NJ, 1997.
[96] International Atomic Energy Agency (IAEA), Vienna (Austria), INIS: Subject categories and scope descriptions, 1991. IAEA-INIS-3 (Rev.7).
[97] International Atomic Energy Agency (IAEA), Vienna (Austria), INIS: Instructions for submitting abstracts, 1988. IAEA-INIS-4 (Rev.2).
[98] Fachinformationszentrum Karlsruhe, Manual for subject indexing, 1990. FIZ-KA-Serie 3-3, 160 pages.
[99] Japan Information Center of Science and Technology (JICST), JICST-Thesaurus – English version – Vol. 1. Tokyo, 1987.
[100] C. Hitzeroth, D. Marek, and J. Müller, Leitfaden für die Erfassung von Dokumenten in der Literaturdokumentation. Verlag Dokumentation: München, 1976.
[101] International Organization for Standardization, Geneva, ISO 3166: Codes for the representation of names of countries, 4th ed., 1993.
[102] International Atomic Energy Agency (IAEA), Vienna (Austria), INIS: Terminology and codes for countries and international organizations, 1987. IAEA-INIS-5 (Rev.6).
[103] Fachinformationszentrum Karlsruhe, List of journals and serial publications, 1992. FIZ-KA-Serie 3-8, 240 pages.
[104] M. Ockenfeld and E. Wetzel, Fachinformationsdatenbanken und Informationssysteme. Gesellschaft für Mathematik und Datenverarbeitung (GMD), Inst. für Integrierte Publikations- und Informationssysteme (IPSI), 1990.
[105] A. Oßwald, Dokumentlieferung im Zeitalter Elektronischen Publizierens. Schriften zur Informationswissenschaft 5, Universitätsverlag Konstanz: Konstanz, 1992.
[106] C. Best, B. Gravemann, A. Jacobs, and O. Ruczka, Erste Erfahrungen mit dem automatischen Übersetzungssystem METAL, ABI-Technik, vol. 13, no. 1, pp. 41–44, 1993.
[107] P. Biebricher, N. Fuhr, G. Knorz, G. Lustig, and M. Schwantner, Entwicklung und Anwendung des automatischen Indexierungssystems AIR/PHYS, Nachrichten für Dokumentation, vol. 39, no. 3, pp. 135–143, 1988.
[108] W. Lück, W. Rittberger, and M. Schwantner, Der Einsatz des Automatischen Indexierungs- und Retrieval-Systems AIR im Fachinformationszentrum Karlsruhe, in Experimentelles und praktisches Information Retrieval. Festschrift für Gerhard Lustig (R. Kuhlen, ed.), no. 3 in Schriften zur Informationswissenschaft, pp. 141–170, Universitätsverlag Konstanz: Konstanz, 1992.
[109] "Lesesystem Discover 7320. Der Durchbruch." Sonderdruck: PC Magazin, 49, 1987.
[110] H. Behrens, "Datenbanken und ihre Produktion." Foliensammlung zur Vorlesung im SS94, Informationswissenschaft, Universität Konstanz, 1994.
[111] International Atomic Energy Agency (IAEA), Vienna (Austria), INIS: Thesaurus, 1995. IAEA-INIS-13 (Rev.34).
[112] M. Gorman and P. Winkler, eds., Anglo-American cataloguing rules. American Library Association: Chicago, 2nd ed., 1988.
[113] "Webster's New Geographical Dictionary." Merriam: Springfield, MA, 1972.
[114] J. Krause, Holistische Modellbildung als eine Antwort auf die Herausforderungen der Informationswirtschaft, in Herausforderungen an die Informationswirtschaft. Informationsverdichtung, Informationsbewertung und Datenvisualisierung (J. Krause, M. Herfurth, and J. Marx, eds.), Schriften zur Informationswissenschaft 27, pp. 11–21, Universitätsverlag Konstanz: Konstanz, 1996.