Towards an Evaluation Framework for Knowledge Management Systems

Folker Folkens and Myra Spiliopoulou

Institute of Technical and Business Information Systems
Otto-von-Guericke University Magdeburg
Abstract. Companies are rarely able to assess the contribution of the system infrastructure that manages their arguably most valuable asset, knowledge. This paper presents a framework for the ex-post evaluation of Knowledge Management Systems (KMS). The framework focuses on evaluating knowledge management components and their functionality within an organization, so that appropriate integration techniques can be designed for each of them. To this end, it outlines the dependencies and links between the knowledge-intensive corporate entities relevant to evaluation. All phases of organizational knowledge management are described and divided into functions of varying, company-specific relevance, based on an acknowledged categorization from the literature. The goal of the research structured here is the improvement of inadequate knowledge methods: evaluation and, where the assessment warrants it, integration of or migration to more suitable methods of corporate knowledge management.
Key words: Knowledge Management Systems, Evaluation, Model, Framework, Information Systems Success
1 Introduction
Knowledge is the most important asset of companies today. It has become the most significant resource and source of competitive advantage. As Drucker [8] stated, the traditional factors of production (land, natural resources, labor, capital and so forth) have become secondary. The intellectual factor rose as the traditional ones shrank, and that process is continuing. Knowledge is thus a key factor to focus on, to keep, to acquire, to create, to externalize, to share and to use. KMS are meant to support these tasks, though not necessarily all of them equally, as will be discussed. When it comes to evaluation in knowledge management, there are two scopes: Intellectual Capital (IC) and systems managing knowledge. While IC evaluation assesses the non-physical or intangible assets of a company [17], KMS evaluation stresses measuring the contribution of systems that manage knowledge. We focus on the latter. While firms spend substantial resources and invest in infrastructure to support knowledge management, the effect of these efforts has neither been evaluated nor have improvements fitting the company been identified. Swaak and Lansink [25] argue that this is caused by insufficient instruments to measure and evaluate KMS. There are many studies on metrics in place, as will be discussed in the next section. Those studies use a number of metrics, each adapted to the given environment and needs. Unfortunately, it has not been possible to generalize metrics, so that each of the studies uses different ones, though subsets may be similar. Gable and Sedera [10] analyzed several Enterprise System Success studies, concluding likewise that varying results are a consequence of incomplete or inappropriate measures, a lack of theoretical grounding, a myopic focus on financial indicators, or weaknesses in the survey instruments employed. Metrics have been employed for the assessment of system success, both ex ante and ex post. Since the system infrastructure to be evaluated is already in place for the evaluation and later integration/migration in the focus of this paper, ex-post evaluation is discussed here. Once shortcomings have been identified, they have to be corrected by new or improved methods. Other (possibly non-technical) actions, e.g. organizational restructuring, might be a choice, but they are not of interest for now. The consequences of measurement results are another research issue to be discussed later. Focusing on KMS, the target for evaluation and measurement has been set: What is the contribution of the KMS in use, and does it reach its potential? To answer these questions, metrics for measurement have to be formulated. These metrics have to be specific to knowledge management, since a knowledge management system is to be assessed. As stated before, many studies on the evaluation of systems using metrics have been done; however, the metrics in each study were formulated for its specific purpose. In this paper a framework for KMS evaluation is formulated that aims to provide a structure for metrics classification.
To this end, knowledge management is first categorized into knowledge functions, so that metrics can be associated with them. Section 3.1 discusses which granularity is most appropriate and describes these knowledge functions. The model presented in Section 3 provides a framework specifically designed for the evaluation of systems managing knowledge; metrics for KMS evaluation therefore have to be addressed. For that, related work is examined in Section 2, followed by an explanation of knowledge management functions in Section 3, which pictures knowledge management as a whole divided into functions. Thereafter, the framework for knowledge management system evaluation is described. Section 4 concludes the paper.
2 Related Work on System Evaluation
Many researchers evaluate KMS from an IS (Information Systems) point of view. DeLone and McLean created the most cited and accepted evaluation model [5] [6] in IS evaluation research so far. They analyzed the relevant literature that decades of research had produced on evaluation models and condensed it into a multidimensional model for IS evaluation. This model has been widely accepted, has served as a basis for further evaluation research and has been used in empirical studies [3]. Nonetheless, the model was not designed for knowledge management purposes and therefore does not consider knowledge management aspects. Maier and Hädrich [16] sought to adapt the DeLone and McLean model to the demands of knowledge management for KMS evaluation. To do so, they changed model components and assigned success factors to their two kinds of KMS: interactive systems and integrative systems. Interactive systems support humans communicating tacit knowledge, while integrative systems focus on aiding externalization and internalization using a central knowledge base (i.e. a repository). Other valuable approaches to KMS evaluation exist: the deductive approaches Tobin's q [18] and Calculated Intangible Value [22], as well as the inductive analytical approaches Intellectual Capital Approach [22], Intangible Assets Monitor [24], Intellectual Capital Navigator [22], Balanced Scorecard [15] and Skandia Navigator [21]. We agree with Maier and Hädrich that these are insufficiently operationalized to accomplish KMS assessment. Kankanhalli and Tan [14] as well as Maier and Hädrich argue that, with those approaches, KMS success is evaluated at an abstract level influenced by an unmanageable and unstructured number of factors. Furthermore, Kankanhalli and Tan [14] provide a literature overview in which they analyzed measures and metrics for KMS evaluation. They found that generalizable metrics are lacking, though they would be desirable. Metrics are measures of key attributes that yield information about a phenomenon [23]. They are well suited, and even necessary, to reveal relationships. In addition, several authors [2] [7] [13] argue that proper measurement requires adapted metrics. Probst and Romhardt [20] divided knowledge management into several modules or functions. This widely acknowledged classification characterizes knowledge management in organizations by defining functions/tasks¹ to accomplish. The classification model is not grounded in a theoretical derivation.
It was created in a practical environment, through group discussions and interviews with practitioners. This practice orientation is intended to make KMS evaluation applicable. The Probst and Romhardt knowledge model has been widely accepted and will serve as the basis for the framework discussed here. Furthermore, the classification encompasses a holistic concept of knowledge management and serves as a basis for further research on systems for knowledge management. We take this categorization to build a frame for the classification of factors for KMS evaluation. Thus, it will serve as a structure for existing metrics. The existing metrics are valuable, but so extensive that they are neither generalizable nor manageable. This paper aims to provide a structure for metrics towards KMS evaluation. As opposed to other evaluation approaches, such as that of Maier and Hädrich [16], we seek to create an extendable framework for evaluation by integrating metrics. Furthermore, our frame is based on the Probst and Romhardt categorization of knowledge management functions [20]. We think that this classification represents corporate knowledge management best, as will be discussed in the next section. The framework is oriented on the corporate environment and targets¹, which will be reflected by the metrics used. Furthermore, Iversen and Kautz note that the implementation of metrics requires very clear goals as well as the 'right' metrics [13]. That is, metrics have to be adapted to the corporate goal and the evaluation purpose; this is taken into account here, is important in our view, and further motivates the framework.

¹ The terms "knowledge task" and "knowledge function" have been used interchangeably in several articles and mean the same thing. Here, we use the term "knowledge function".
3 Evaluation of Knowledge Management Systems
The purpose of KMS is to support corporate knowledge management. This section first describes what knowledge management consists of and how to separate it into functions. Afterwards a framework is depicted that is based on these functions. This new approach assesses knowledge management functions and, through them, their accomplishment by KMS.

3.1 Knowledge Management Functions
When considering the different areas of knowledge management, several approaches to distinguishing them can be found. These areas, supported by knowledge management functions, differ mostly in granularity. While Davenport and Prusak [4] identify three tasks of knowledge management (generation, codification/coordination and transfer), Probst [20] distinguishes eight tasks. These tasks are functions of knowledge management that support the company. For the purpose of system evaluation, the knowledge management separation of Probst is the most appropriate one. This differentiating classification enables target-oriented management: it structures the management process into logical tasks and provides clues for intervention. That cannot be ensured with the coarser granularity of Davenport and Prusak, for instance. Last but not least, the Probst separation is a proven mechanism for finding the reasons for knowledge problems [20] [19] [12]. Therefore, this classification is an appropriate foundation for the evaluation approach presented in this paper. The importance of the functions given by Probst [20] [19] [12] differs significantly from one company to the next. Mertins and Heisig [17] exposed significant differences regarding knowledge management functions in organizations in their survey. Their findings² are shown in Table 1, which highlights how the importance of knowledge management functions differs between organizations. Organizations thus have different demands on knowledge management functions, so that no generic evaluation method that ignores those differences is appropriate.

² Knowledge management functions have been adapted here to a unified terminology across publications in the field; e.g. Mertins and Heisig [17] refer to 'apply knowledge', while Probst and Romhardt [20] as well as Heinrich [12] call this 'knowledge distribution', and Alavi and Leidner [1] term it 'knowledge transfer'. The same slight difference appears with 'knowledge goals' vs. 'knowledge targets', or 'store knowledge' vs. 'knowledge preservation', etc.
Knowledge Management Function     | Important | Medium | Less Important
----------------------------------|-----------|--------|---------------
Definition of Knowledge Targets   |    48%    |  32%   |     20%
Identification                    |    65%    |  24%   |     11%
Utilization                       |    96%    |   3%   |      1%
Creation                          |    84%    |   8%   |      8%
Distribution                      |    91%    |   7%   |      2%
Preservation                      |    78%    |  16%   |      6%

Table 1. Importance of knowledge management functions within various organizations. A 5-point Likert scale was used in the survey; the table aggregates the results of Mertins and Heisig [17].
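To illustrate how a table of this shape could be produced from raw survey data, the sketch below aggregates hypothetical 5-point Likert responses into the three bands of Table 1. The banding rule (4-5 = important, 3 = medium, 1-2 = less important) and the response data are our assumptions; Mertins and Heisig [17] do not publish them.

```python
from collections import Counter

def band(score: int) -> str:
    # Hypothetical mapping of a 5-point Likert score onto Table 1's bands.
    if score >= 4:
        return "important"
    if score == 3:
        return "medium"
    return "less important"

def aggregate(responses: list[int]) -> dict[str, float]:
    """Return the share of responses falling into each band, in percent."""
    counts = Counter(band(s) for s in responses)
    n = len(responses)
    return {b: round(100 * counts.get(b, 0) / n, 1)
            for b in ("important", "medium", "less important")}

# Invented responses for one function, for illustration only:
shares = aggregate([5, 5, 4, 3, 2, 4, 5, 1, 4, 4])
```

A real replication would of course require the original survey data; the point is merely that the three-band presentation is a simple aggregation over per-respondent scores.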
Recapitulating, knowledge management can be broken down into parts or functions to be supported [17] [12] [20] [19] [9]: knowledge identification, knowledge acquisition, knowledge creation, knowledge distribution, knowledge utilization and knowledge preservation. Knowledge evaluation and the definition of knowledge targets are superordinate, governing the roles of the knowledge functions in an organization. The definitions given below combine the literature cited above. Although methods to support these functions have not yet been discussed in the literature, we provide a first classification below. Additionally, we state what the importance of each function usually depends on.

Knowledge Identification makes knowledge visible. The bigger the company, the more difficult it is to identify existing knowledge. Lacking transparency causes inefficiencies, "uninformed" decisions and actions, and redundancies. Effective knowledge management must establish transparency and support people searching for knowledge. The importance of knowledge identification in a company depends on company objectives, infrastructure and company culture. Ontologies, knowledge maps, search engines and information retrieval techniques in general are examples of appropriate methods to implement Identification.

Knowledge Acquisition obtains knowledge. No company is able to produce all the knowledge it needs itself. Know-how which a company cannot develop itself has to be acquired. That can be accomplished by acquiring innovative companies, recruiting experts, buying documents from outside sources, hiring consultants, buying patents and so forth. Relationships with customers, suppliers, competitors and partners also serve as potential external sources of knowledge. A KMS may point to external sources if the desired knowledge is not available within the company. The importance of knowledge Acquisition depends on company culture and objectives.
Methods to implement Acquisition are, for example, databases containing indices of external sources potentially valuable for the company.

Knowledge Creation generates knowledge. The development of new knowledge in an organization focuses on creating new products, better ideas, more efficient processes or new skills. Such knowledge cannot be bought on the market, or what is available does not fit the company's needs or is simply too expensive, so that in-house development is desirable. Non-existent knowledge is usually generated in the research department. Nevertheless, the creativity to develop new ideas or capabilities not yet present arises throughout the company, so that knowledge creation is mostly not limited to a single (research) department. Furthermore, expert systems might help to develop, visualize or combine knowledge, so that new conclusions can be drawn and new knowledge created. It is essential that created knowledge be preserved as some kind of "lessons learned" in a KMS, so that it is available whenever needed. The importance of knowledge Creation depends on company culture, company objectives and innovation/research efforts. Methods to implement Creation are all kinds of data mining and learning tools [1].

Knowledge Distribution shares knowledge. Knowledge has to be made available throughout the company: the usage of knowledge requires availability. This means spreading and sharing know-how which is already present within the organization. This function goes hand in hand with knowledge identification, which supports the spread of knowledge. The importance of knowledge distribution depends on the size of the company, its infrastructure, company objectives, culture and the velocity of the company regarding knowledge [4]. Suitable methods to implement Distribution are knowledge directories/maps, discussion forums and electronic bulletin boards, as well as semantic annotation of documents and communication technology in general.

Knowledge Utilization applies knowledge. Simple availability does not guarantee that present knowledge is indeed used. Knowledge Identification and Distribution are preconditions for successfully applying knowledge. They still do not ensure utilization, but the chance that highly available and well-distributed knowledge will be used does increase.
Knowledge has to be trusted before routines or practices are changed in favor of better procedures, for example. Ensuring the usage of the KMS might positively influence the application of new knowledge and prevent sticking to "unchangeable" old habits³. Furthermore, utilization means assisting knowledge workers in applying the implemented knowledge. The importance of knowledge utilization depends on the complexity of problems, company culture, trust in knowledge sources and company infrastructure. The design of system interfaces may also greatly influence Utilization [26]. Possible implementations of Utilization are associated with expert systems, decision support systems and well-designed user interfaces.

Knowledge Preservation stores knowledge. Companies often lose competencies through reorganization processes or simply over time. Knowledge preservation aims at the retention of knowledge assets. Potentially valuable future knowledge has to be selected, structured, updated, made available and stored for the time to come. This has to be accomplished by efficient storage media as well as a KMS to access the knowledge, to prevent valuable expertise from disappearing. The selection of potentially valuable knowledge is important, since a huge amount of stored data, information and knowledge will eventually lose people's trust if they do not find the proper knowledge in a huge database⁴. The importance of Preservation depends on the viscosity of the knowledge to store [4], the amount of knowledge accruing, company objectives, infrastructure and culture. Appropriate methods to implement Preservation are efficient information retrieval techniques, an organizational memory to manage past experiences for future utilization, as well as databases and data warehouses.

Definition of Knowledge Targets defines knowledge importance roles. Concrete objectives derive from these roles, so that the definition of targets is superordinate. The roles define the importance of each knowledge management function for the company. Thus, the definition of targets goes hand in hand with evaluation: skills and technology to be developed and integrated have to be identified beforehand through an appropriate evaluation, although evaluation also includes the measurement of the defined targets. Targets will be a result of the mutual evaluation procedure and the company objectives.

Knowledge Evaluation assesses knowledge. The evaluation of roles and, through this, a possible redefinition of importance roles may be a result, as stated above. Furthermore, the current concrete company objectives regarding knowledge management have to be evaluated against the technology supporting these objectives.

³ This does not mean forcing people to use a KMS, but designing the KMS in a way that it indeed serves the desired purpose and constitutes an advantage for the people using it and applying the provided knowledge.
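The example methods named for each function above can be collected in one place. The grouping below merely restates the section's examples in data form; it is illustrative, not a normative or exhaustive catalogue:

```python
# Example methods this section associates with each knowledge function.
# The entries summarize the text above; the dictionary itself is only an
# illustrative restatement.
METHOD_EXAMPLES: dict[str, list[str]] = {
    "Identification": ["ontologies", "knowledge maps", "search engines",
                       "information retrieval techniques"],
    "Acquisition":    ["indices of external sources"],
    "Creation":       ["data mining tools", "learning tools"],
    "Distribution":   ["knowledge directories/maps", "discussion forums",
                       "electronic bulletin boards",
                       "semantic annotation of documents"],
    "Utilization":    ["expert systems", "decision support systems",
                       "well-designed user interfaces"],
    "Preservation":   ["information retrieval techniques",
                       "organizational memory", "databases",
                       "data warehouses"],
}
```

Note that a method may appear under more than one function (e.g. information retrieval serves both Identification and Preservation), matching the many-to-many relationship introduced in the next subsection.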
3.2 The Evaluation Framework
A significant part of this paper is dedicated to specifying the concept of 'importance' and its linkage to knowledge functions. It has been made clear that such a strategy is necessary for the evaluation of KMS. The framework developed by the authors of this paper gives a holistic view of knowledge management system evaluation. It consolidates the work of several cited authors into a concluding idea which draws on the publications cited in Section 2. The framework of knowledge management evaluation is based on the knowledge functions given in the previous section; from them, a holistic knowledge frame was created which pictures knowledge management within a company. This helps to understand knowledge management within companies and to develop methodologies for the proper assessment of the KMS in use. It applies to various fields such as metrics definition or the evaluation of corporate knowledge-work efficiency. The semantics of the Unified Modeling Language (UML) is used to describe the model. UML, as the most widely accepted modeling notation, is well suited to clearly express dependencies and links between model components. Furthermore, the notation is widely understood, which makes it easier for the message to come across.

Let us begin with what the previous section established: knowledge functions. Companies have different demands on knowledge functions; a consulting company has a different need for knowledge than a freight forwarder, for instance. The demand for knowledge is based on the functions given earlier in this section. Summarizing the findings of Section 3.1, companies have different demands, the "importance roles". An "importance role" expresses a company's specific demand for one of the knowledge functions. For example, a company may have a higher demand for methods that implement the knowledge function "Identification". Table 1 in the previous section outlines such differences and shows that the demand for functions differs significantly. Knowledge functions are implemented by knowledge methods, as Figure 1 depicts. This means knowledge methods are dedicated to functions and realize them. Some examples are given in Section 3.1. All methods supporting one function together express the contribution to that function. A method can be dedicated to one or more functions, and vice versa.

⁴ This is often caused by poor precision and recall rates.
[Figure 1: UML class diagram. The Company (1) defines Targets and has Importance Roles (0..n); an Importance Role interacts with a Knowledge Function; Knowledge Methods (n) implement Knowledge Functions (n) and support Projects; the Company (1) starts Projects (0..n); Metrics assess Knowledge Methods.]

Fig. 1. Creating a context for the evaluation of KMS: evaluation assesses knowledge functions, importance roles and knowledge methods to measure the real contribution of the KMS.
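The relationships depicted in Figure 1 can be sketched as a minimal data model. The class and field names below are our illustrative assumptions (the paper defines the entities only at the UML level), and the weight range is invented for the sketch:

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeFunction:
    name: str                      # e.g. "Identification"

@dataclass
class ImportanceRole:
    # Expresses a company's specific demand for one knowledge function.
    function: KnowledgeFunction
    weight: float                  # assumed normalization: 0.0 (low) .. 1.0 (high)

@dataclass
class KnowledgeMethod:
    name: str                      # e.g. "search engine"
    # A method can implement one or more functions, and vice versa.
    implements: list[KnowledgeFunction] = field(default_factory=list)

@dataclass
class Company:
    name: str
    targets: list[str] = field(default_factory=list)        # knowledge targets
    roles: list[ImportanceRole] = field(default_factory=list)

# A method dedicated to two functions, as the text allows:
identification = KnowledgeFunction("Identification")
preservation = KnowledgeFunction("Preservation")
retrieval = KnowledgeMethod("information retrieval",
                            implements=[identification, preservation])
```

The many-to-many `implements` relation mirrors the `n`-to-`n` multiplicity between functions and methods in the figure; metrics, which assess methods, are sketched separately below Figure 2.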
Companies differ in their objectives, which is reflected in their targets. The targets have to be taken into account when determining the importance of the knowledge functions; the result of this deliberation is the importance role construct. To operationalize this, we take a target and map it onto the knowledge functions: which knowledge function is important, and which less important, for accomplishing the company's target regarding knowledge management? Then appropriate methods dedicated to those functions are chosen to support the needed functions. This procedure is guided by the use of metrics. Metrics are needed to assess the knowledge methods which implement the functions, including their importance as represented by the importance roles. Functions and importance roles then reflect the targets of the company. Two evaluation procedures are implied: the importance role has to be set for each knowledge function by evaluation, and the assessment of knowledge methods represents the current contribution of the methods in use and, with this, of the KMS. Nonetheless, a concrete definition of metrics is not yet part of the framework. Further research will complete the increasingly operational view for which we provide a first frame. For the frame and the interactions themselves it is less important to qualify the metrics object in detail; when operationalizing the framework, it becomes much more important. In the end, evaluation is all about metrics and measurement. At this level of abstraction, metrics do not have to be defined yet; further work will have to complete the puzzle. Nonetheless, the metrics context is defined here. Three specializations have to be measured for each knowledge method: Relevance, Quality and Availability. Relevance specifies the pertinence of the knowledge method, Quality describes how well the method actually fulfills the requirements, and Availability assesses whether the method is at hand for the people who need it. Each of these specialized metrics is associated with a set of metrics to assess methods. The selected metrics take into account that the relevance of the metrics is directly influenced by the importance role of the dedicated knowledge function. This implies a direct relationship between targets and the metrics used to express the corporate knowledge goal. With this, the actual support of methods towards knowledge functions has to be evaluated to discover the real contribution of the methods in use.
[Figure 2: UML fragment. Relevance, Quality and Availability specialize Metrics; Metrics (n) describe (1..n) KMS Quality.]

Fig. 2. Metrics for the evaluation of KMS.
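One plausible way to fold the three metric specializations of Figure 2 into a single per-method score, weighted by the importance role of the function the method serves, is sketched below. The averaging formula, the [0, 1] normalization and the multiplicative weighting are entirely our assumptions; the framework deliberately leaves the concrete metric definitions to future work:

```python
def method_score(relevance: float, quality: float, availability: float,
                 importance: float) -> float:
    """Combine the three metric specializations into one weighted score.

    All inputs are assumed normalized to [0, 1]; 'importance' is the
    weight of the importance role of the function the method implements.
    """
    raw = (relevance + quality + availability) / 3   # unweighted mean
    return round(raw * importance, 3)                # weight by importance role

# A method scoring well on all three but serving a low-importance function
# contributes less than a mediocre method serving a critical function:
high_on_minor = method_score(0.9, 0.9, 0.9, importance=0.2)
fair_on_major = method_score(0.6, 0.6, 0.6, importance=0.9)
```

Whatever the concrete formula, the qualitative point carried over from the framework holds: the importance role directly scales how much a method's measured quality counts towards overall KMS quality.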
Methods serve the purpose, in the end, of supporting knowledge-intensive projects and thus the company which performs the projects. The project construct represents all kinds of knowledge tasks that people or groups of people have to accomplish to succeed with their (knowledge-intensive) assignment. For metrics definition this implies that measurement has to assess the support and accomplishment of projects, i.e. the support of humans in their assignments. Measures regarding the KMS methods therefore include time savings, decision support or goal achievement, for instance. Focusing on the core message of the framework in Figure 3, there are two kinds of evaluation: the revaluation of the company-specific importance roles of the knowledge functions, and the comparison of the actual with the desired contribution of the methods.

[Figure 3: the complete UML model, combining Figures 1 and 2. Identification, Acquisition, Creation, Distribution, Utilization and Preservation specialize Knowledge Function; Method 1 … Method n specialize Knowledge Methods, which implement Knowledge Functions and support Projects; Metrics, with the specializations Relevance, Quality and Availability, assess Knowledge Methods and describe KMS Quality; the Importance Role influences Metrics and interacts with Knowledge Functions; the Company defines Targets, has Importance Roles and starts Projects.]

Fig. 3. The UML evaluation model for KMS describes the links and dependencies of the components relevant for system evaluation.

The knowledge importance roles of a company have to be discovered, so that appropriate methods can be applied to support the functions properly. Disregarding the knowledge roles may lead to overexpansion⁵, wastefulness⁶ or squandering⁷. The framework therefore implies that each knowledge function is implemented by methods. To assess these methods and their contribution to the company, the knowledge roles have to be revalued first. The determination of the roles is likely to be the most difficult part. To accomplish it, the company targets have to be reviewed and scrutinized again to extract their relevance for knowledge and the system infrastructure. This is an individual valuation that reconsiders company aspects such as environment, culture, company targets, orientation and so forth and turns them into important aspects of knowledge management. The conversion extracts knowledge-intensive implications, which then constitute the entity 'importance role'. Based on the defined roles, the desired (potential) contribution of the functions has to be compared with the actual contribution. Measurement, and thus the comparison, has to be accomplished using metrics. A set of metrics measures the efficiency and effectiveness of each function by assessing the KMS methods implementing it. With this, the actual contribution of the methods is assessed and compared with the desired (potential) contribution: meeting the requirements of the importance roles of the knowledge functions. These constructs are essential for the evaluation of KMS. The measurement of the constructs on which KMS quality depends can be accomplished using metrics. Currently, scholarship on metrics and generalizable measurement techniques in the field of KMS evaluation is lacking [17] [14] [16]. Therefore, more research in that field is needed and will follow, to apply the framework of KMS evaluation presented here.

⁵ low efficiency and low effectivity [11]
⁶ low efficiency and high effectivity [11]
⁷ high efficiency and low effectivity [11]
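The three failure modes named in the footnotes span the efficiency/effectivity quadrants of Heinrich et al. [11]. They can be expressed as a simple classification; the 0.5 cut-off and the [0, 1] scale are illustrative assumptions of ours, since [11] defines the quadrants, not a numeric threshold:

```python
def diagnose(efficiency: float, effectivity: float, cut: float = 0.5) -> str:
    """Classify a knowledge function into the quadrants of [11].

    'cut' is an assumed threshold separating 'low' from 'high' on a
    normalized [0, 1] scale.
    """
    eff_high = efficiency >= cut
    effect_high = effectivity >= cut
    if not eff_high and not effect_high:
        return "overexpansion"      # low efficiency, low effectivity
    if not eff_high and effect_high:
        return "wastefulness"       # low efficiency, high effectivity
    if eff_high and not effect_high:
        return "squandering"        # high efficiency, low effectivity
    return "adequate"               # high on both: no failure mode

verdict = diagnose(0.2, 0.8)        # effective but inefficient support
```

The fourth quadrant (high on both dimensions) has no failure label in the footnotes; "adequate" is our placeholder for it.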
4 Conclusion and Outlook
Systematic evaluation of the benefits of KMS still lags behind. Users content themselves with success stories or surveys of satisfaction with their KMS [16]. Despite multiple success stories, it has not been possible to sufficiently measure the success of KMS [25]. The framework presented in this paper is a first step towards holistic KMS evaluation. As such, the frame operates on a meta level, while metrics and measurement techniques still have to be developed and applied with the framework. Furthermore, improving the knowledge functions through methods enables maximum utilization if the evaluation concludes that there is unused potential in the company which would yield a benefit. To avoid a competitive disadvantage, the required methods then have to be integrated. That can be accomplished through a migration of the KMS towards the potential knowledge functions; the redevelopment or purchase of a suitable system might also be desirable. The best course of action after a successful evaluation depends on the evaluation result, the constraints⁸ and the methods used. This paper draws a holistic picture of KMS evaluation and describes its links and dependencies in a frame, although further research is required to fill that frame. With this framework, the coming research is structured so as to finally provide a holistic methodology for KMS evaluation. Firstly, metrics will have to be specified to make the framework applicable. Secondly, an overview of available knowledge methods will help in assigning methods to the knowledge functions to be supported within the company. Thirdly, migration strategies will have to be discussed for the integration of the required knowledge methods. These issues characterize the research required for a holistic evaluation of KMS.
⁸ e.g. the corporate environment, company culture, estimated return on investment and so forth

References

1. M. Alavi and D.E. Leidner. Review: Knowledge Management and Knowledge Management Systems: Conceptual Foundations and Research Issues. MIS Quarterly, Vol. 25, No. 1, 107-136, 2001.
2. J. Ballantine, M. Bonner, M. Levy, A. Martin, I. Munro, and P.L. Powell. Common Framework for the Evaluation Process of KBS and Conventional Software. Knowledge-Based Systems Journal, Vol. 11, No. 2, 145-160, 1998.
3. J. Ballantine, M. Bonner, M. Levy, A. Martin, I. Munro, and P.L. Powell. The 3-D Model of Information Systems Success: the Search for the Dependent Variable Continues. Information Resources Management Journal, Vol. 9, No. 4, 5-14, 2002.
4. T.H. Davenport and L. Prusak. Working Knowledge: How Organizations Manage What They Know. Harvard Business School Press, 1998.
5. W.H. DeLone and E.R. McLean. Information Systems Success: The Quest for the Dependent Variable. Information Systems Research, Vol. 3, No. 1, 60-95, 1992.
6. W.H. DeLone and E.R. McLean. Information Systems Success: Revisited. In Proceedings of the 35th Annual Hawaii International Conference on System Sciences, Volume 8, 238, 2002.
7. Chief Information Officer, Department of the Navy. Metrics Guide for Knowledge Management Initiatives, 2001.
8. Peter Drucker. Post-Capitalist Society. Butterworth-Heinemann, Oxford, United Kingdom, 1993.
9. R. Franken and R. Gadatsch. Integriertes Knowledge Management. Konzepte, Methoden, Instrumente, Fallbeispiele. Vieweg, Wiesbaden, 2002.
10. Guy G. Gable, Darshana Sedera, and Taizan Chan. Enterprise Systems Success: A Measurement Model. In Proceedings of the International Conference on Information Systems 2003, 2003.
11. L.J. Heinrich, I. Häntschel, and G. Pomberger. Diagnose der Informationsverarbeitung. CONTROLLING, Vol. 3, 196-203, 1997.
12. Lutz J. Heinrich. Informationsmanagement. Oldenbourg Verlag, München, 2002.
13. J. Iversen and K. Kautz. The Challenge of Metrics Implementation. In Proceedings of the 23rd Information Systems Research Seminar in Scandinavia, 2000.
14. A. Kankanhalli and B.C.Y. Tan. A Review of Metrics for Knowledge Management Systems and Knowledge Management Initiatives. In Proceedings of the 37th Hawaii International Conference on System Sciences, 2004.
15. Robert S. Kaplan and David P. Norton. The Balanced Scorecard: Translating Strategy into Action. Harvard Business School Press, Boston, 1996.
16. R. Maier and T. Hädrich. Ein Modell für die Erfolgsmessung von Wissensmanagementsystemen. WIRTSCHAFTSINFORMATIK, Vol. 43, No. 5, 497-508, 2001.
17. K. Mertins, P. Heisig, and J. Vorbeck. Knowledge Management: Concepts and Best Practices. Springer-Verlag, Berlin Heidelberg, 2003.
18. K. North, G. Probst, and K. Romhardt. Wissen messen - Ansätze, Erfahrungen und kritische Fragen. zfo - Zeitschrift für Führung und Organisation, Vol. 67, 158-166, 1998.
19. G. Probst, S. Raub, and K. Romhardt. Managing Knowledge. Springer-Verlag, Berlin Heidelberg, 1999.
20. G. Probst and K. Romhardt. Bausteine des Wissensmanagements - ein praxisorientierter Ansatz. In Handbuch Lernende Organisation, Gabler, Wiesbaden, 129-144, 1997.
21. D.J. Skyrme and D.M. Amidon. New Measures of Success. Journal of Business Strategy, Vol. 40, 20-24, 1998.
22. Thomas Stewart. Intellectual Capital: The New Wealth of Organizations. Currency/Doubleday, New York, 1997.
23. D.W. Straub, D.L. Hoffman, B.W. Weber, and C. Steinfield. Measuring e-Commerce in Net-Enabled Organizations: An Introduction to the Special Issue. Information Systems Research, Vol. 13, No. 2, 115-124, 2002.
24. Karl Erik Sveiby. The New Organizational Wealth: Managing and Measuring Knowledge-Based Assets. Berrett-Koehler, San Francisco, 1997.
25. J. Swaak, A. Lansink, E. Heeren, B. Hendriks, P. Kalff, J.-W. den Oudsten, R. Böhmer, R. Bakker, and C. Verwijs. Measuring knowledge management investments and results; two business cases. In Proceedings of the 59th AEPF Conference, Bremen, 2000.
26. Ivo Wessel. GUI Design. Carl Hanser Verlag, München/Wien, 2002.