Visualising and measuring intellectual capital in capital markets: a research method
Laurence Lock Lee, Optimice Pty Ltd, Belmont, Australia, and
James Guthrie, The University of Sydney, Sydney, Australia

Abstract
Purpose – The purpose of this paper is to describe a research method successfully used to study intellectual capital (IC) and IC flows through a highly networked marketplace.
Design/methodology/approach – The method integrates computer-assisted content analysis (CA) and multivariate statistics. The CA is performed on a large source of business and analyst reports. The method enables the elements of IC to be related to firm performance, using 156 firms in the global information technology market as a testing ground.
Findings – Computer-assisted CA techniques can be used to analyse larger samples of firms for IC attributes such as human capital, internal capital and external or relational capital than has previously been feasible using manual CA methods.
Research limitations/implications – Several limitations of the method are identified and relate to the computer-assisted CA method used. First, the method relies on the existence of a large body of content, in this case business reports and articles, to create the indices for the IC attributes. A second limitation is that the IC attributes are constructed from public sources (i.e. they represent the view of external reporters rather than reporters internal to the organisation). The method presented combines and extends the benefits of both qualitative and quantitative methods. The richer source of IC content for a larger sample of firms is made accessible through computer-assisted CA. The overall method enables insights to be explored in relating firm IC to firm performance in the marketplace.
Originality/value – The integrated research method presented is the result of original research. The value to researchers is the opportunity it provides to study the IC/performance relationship across markets, rather than be limited to small-sample or limited-attribute studies.
Keywords Communication technologies, Globalization, Economic sectors, Intellectual capital, Capital markets
Paper type Research paper
The authors would like to thank Fiona Crawford for her editorial assistance. They would also like to thank the participants at the 3rd Workshop on Visualising, Measuring and Managing Intangibles and Intellectual Capital, Ferrara, Italy, 29-31 October 2007, and others who provided comments on this paper, including the reviewers for the Journal of Intellectual Capital.

1. Introduction
A stream of intellectual capital (IC) research has focused on the IC disclosure (ICD) of firms operating in capital markets (Chauvin and Hirschey, 1993; Johnson et al., 2002; Lev and Sougiannis, 1999). The rationale is that if firms are asked to report on their IC in the same standardised, auditable form as traditional accounting reports, then current asymmetries in information provision to capital markets can be
limited and markets will therefore be less susceptible to unfair trading practices (Bukh et al., 2001; Holland, 2006; Pakhus, 2000). The growing gap between market and book values has provided much of the impetus for this type of IC research. The so-called "intangible asset" gap (Stewart, 1997) has provided the opportunity for fund managers and stock brokers to exercise their private insights and expertise to their own advantage (Abhayawansa and Abeysekera, 2009). For these market actors, any change in the status quo in terms of greater disclosure would only be detrimental (Holland, 2001, 2003, 2006; Johanson, 2003).

While the admirable objectives of current ICD research are acknowledged, there is still much debate as to whether these objectives can be met (Abeysekera, 2006; Chatzkel, 2004). Ongoing definitional issues and varied measurement methodologies have brought into question the validity of early country-level ICD studies (Abeysekera, 2006). While this situation will improve as ICD studies continue to mature, the issue remains as to how investors in capital markets can best access the information needed to make informed decisions. For company owners, leveraging IC to maximise share market value is a prime measure of successful wealth creation. Several studies have demonstrated the implicit use of IC factors in fund manager decision making (Ghosh and Wu, 2007; Holland, 1996; Johanson, 2003), though there is little evidence of market analysts making explicit use of IC reports (Johanson, 2003; Holland and Johanson, 2003; Abhayawansa and Abeysekera, 2009). The limited number of firms providing IC reports, as a proportion of all firms that an analyst is covering, could be one reason why analysts have not embraced IC reports. The current adoption level for IC reporting (ICR) is small enough to be considered irrelevant. The challenge is, therefore, to provide IC information across the majority of firms attracting analyst coverage.

To date this has not been possible using current research methods. IC reports have either been developed as a self-reporting voluntary initiative (Edvinsson, 1997) or third-party IC assessments have been conducted through content analysis (CA) of annual reports or other reporting media (Guthrie and Petty, 2000; Guthrie et al., 2004; Guthrie and Ricceri, 2009). Another alternative treatment of IC, which has been applied across whole market sectors, relies on the relatively narrow IC proxies of research and development (R&D) and advertising expenditure captured for certain market sectors within the Compustat database (Chauvin and Hirschey, 1993; Johnson et al., 2002; Lev and Sougiannis, 1999). These studies are collectively referred to here as the "intangible asset measurement method". While these studies have been successful in relating the IC proxy measures to firm performance measures, their effectiveness is limited by the extent to which the proxy measures are representative of IC, and by the market sectors for which R&D and advertising are significant.

The objective of this paper is to provide details of a research method which addresses the shortcomings of current IC measurement formulations for capital markets. The method is based on the use of a computer-assisted CA method applied to business press articles and reports. The method is empirically demonstrated through its application to the global information technology (IT) sector.

The rest of the paper is structured as follows. Section 2 provides a review of the literature on IC measurement for capital markets.
Section 3 outlines the research method designed to address these shortcomings. Section 4 describes the application of the method to the global IT market sector. The application demonstrates
how the method facilitates quantitative analysis of the impact of IC on firm performance. Finally, Section 5 discusses the current strengths and weaknesses of the method, together with recommendations for its broader development and application.

2. Measuring IC in capital markets
Of keen interest to the accounting research community is the diminishing value relevance of financial accounting (Beaver, 2002). On market efficiencies, researchers have been interested in whether market-to-book ratios are a measure of market inefficiency (Lev, 2001; Smithers and Wright, 2000). For example, firms with high market-to-book ratios could be viewed as overpriced. However, the "market-to-book gap" cannot be fully explained in terms of traditional accounting measures like earnings, or even forecast earnings levels (Hand and Lev, 2003). Others argue that contemporary accounting practice should be maintained and that intangible assets should not be recognised on the balance sheet, questioning whether IC is a matter for accounting practice at all (Walker, 2009).

The loss of value relevance of financial accounting measures is based on the claim that the usefulness of earnings, cash flows and book values in predicting total shareholder returns has diminished over the past 25 years (Lev and Sougiannis, 1999; Ball, 1992). The inference from this claim is that financial accounting reports are missing important information that could better inform stakeholders of potential share market returns. Initial studies used linear regression techniques to demonstrate a reduced level of share price variation that could be explained by traditional accounting measures like earnings, book values and cash flows (Lev and Zarowin, 1999). These findings have been challenged and extended from different methodological directions (Collins et al., 1997). It has also been argued that increased volatility in the market can bias simple regressions to over-emphasise the loss of relevance (Francis and Schipper, 1999). Francis and Schipper (1999) analysed firms from high-volatility technology sectors and low-volatility industrial sectors and found an increase in balance sheet and book value relevance, but continuing support for a decline in the relevance of earnings information. Liu and Thomas (2000) demonstrated that value relevance can be enhanced by the inclusion of forecast earnings in regression equations. Analyst consensus on earnings forecasts, and their accuracy, has been found to be highly dependent on the level of intangibles a firm possesses: the higher the level of intangibles, the poorer the level of consensus and accuracy (Barron et al., 2002). While there have been various challenges to the detail of the "loss of relevance" of financial accounting measures in predicting share values, the general tenet of these studies is that the loss of relevance applies most particularly to earnings reports.

Researchers trying to explain the gap between market and book values have been calling for higher levels of disclosure on known intangibles. Common intangibles, like R&D and advertising, have been shown to have a strong correlation with share price in certain industries (Chauvin and Hirschey, 1993; Johnson et al., 2002; Lev and Sougiannis, 1999). An intriguing study of the effects of intangible assets on share prices in the pre-SEC era, when regulations on the capitalisation of intangibles were less strict, found no evidence that increased capitalisation of intangibles impacted share prices.
In fact, investors inferred that by increasing the capitalisation of intangibles, the firms were overstating their earnings, resulting in a loss of relevance of earnings statements when high levels
of intangible capitalisation had occurred (Ely and Waymire, 1999). In contrast, Barth and Clinch (1998) found a positive effect on share price when the value of accounting intangibles is re-stated. A correlation was found between high levels of intangible assets, as measured by Tobin's Q, and sustained profitability, but also sustained losses, as firms either locked into a sustainable competitive position through their intangible assets or sustained losses through a loss of reputation (Villalonga, 2004). A similar contemporary study reinforced the influence of IC on capital markets by including additional IC factors like IT investments and expenditure and patents per employee (Ghosh and Wu, 2007).

One policy response to the issue of intangibles is to propose greater ICD. ICR is being promoted as a major vehicle for informing market actors of the value inherent in IC-intensive firms (Boedker et al., 2007). As early as 1995, Skandia had attached an IC supplement to its annual report. The increasing importance of intangibles was identified by the Swedish researchers Sveiby and Risling (1986) in their seminal work on "Company Knowhow". Since this time a plethora of literature has been published in support of methods for measuring and managing intangibles (Edvinsson and Malone, 1997; Johanson et al., 1999; Lev, 2001; Sveiby, 1997; Guthrie and Ricceri, 2009). From Sveiby's (1997) intangible asset monitor and Kaplan and Norton's (1996) balanced scorecard, increasingly sophisticated scorecards have been developed (Liebowitz and Suen, 2000; Mouritsen et al., 2001; Wall and Doerflinger, 1999). IC has been decomposed into subsidiary concepts like structural capital, human capital, customer capital, innovation capital, external capital, stakeholder capital and knowledge capital for the purposes of measurement and reporting for management. The Organisation for Economic Co-operation and Development (OECD) (1999) commissioned several projects to explore the spread of ICR across several continents. More recent developments have recognised that IC metrics alone are not effective in communicating value propositions to the marketplace. The Danish Government has published guidelines for ICR which encourage the inclusion of "knowledge narratives" to better communicate value-creating challenges and initiatives (Mouritsen et al., 2002; Mouritsen, 2003; Pakhus, 2000).

Despite the significant development activity around ICR, anecdotally it appears that progress has slowed. The anticipated increase in ICR, following Skandia's lead, has not eventuated. Attempts to develop single indices for intangible asset performance (Lev, 1999; Bontis et al., 1999) have also struggled to gain acceptance. This lack of progress led Johanson (2003) to report on potential reasons for market actors' ambivalence to IC information. He offers five primary reasons: a lack of understanding of intangibles; a lack of trust in the measures; an exaggerated risk of losing the intangible resource; a lack of confidence in management to take action with respect to intangibles; and the mentality of market actors towards softer intangibles (i.e. their fixation on numbers). Also, Holland (2003) points to a rift between what company executives, on the one hand, and fund managers and analysts, on the other, believe is relevant IC information. Holland (2003, p. 46) considers that dysfunctions in the information value chain from company executive to market actors present real barriers to progress with IC.

ICR has both an internal and an external effect.
The internal reporting aspects of the balanced scorecards and/or intangible asset monitors can provide support to effective management decision making (Kaplan and Norton, 1992; Sveiby, 1997).
The external reporting aspects of ICR can contribute to the externally focused executive management element in influencing market actors (Unerman et al., 2007). From a market perspective, fund managers are seen to have a major influence over a firm's market valuation (Abhayawansa and Abeysekera, 2009). Fund managers are competing for "private" information that might better inform their investment decisions. Holland (1999, p. 15), in studying the information acquisition habits of fund managers, found that they are particularly interested in relationship/social capital information:

    Fund Managers were very interested in how companies managed their relations with customers and suppliers, and how they exploited customer loyalty, company brands, trademarks, distribution channels, advertising, reputation and image with customers. They were very interested in how these market based intangibles created competitive position in the market place and how this was expected to contribute to shareholder value.
Looking beyond IC statements to the broader issue of disclosure, researchers are now considering both complementary and alternative means for "disclosing" future value creation information to the marketplace (Boedker et al., 2008). From an accounting perspective, the financial reporting standards for intangibles are inadequate and lead to a gross understatement of their value (Lev, 2001). The lack of disclosure on intangibles is seen as facilitating insider trading through privileged access to information by some market actors (Holland, 1999; Lev, 2001; Wallman, 1995). However, what to disclose is somewhat problematic. There is general agreement that ICR should take a narrative form and describe the value creation story for the firm (Boedker et al., 2007). IC reports are now looking to lead with the value creation "story" supported by IC metrics, the reverse of the situation with the balance sheet and balance sheet notes (Mouritsen et al., 2002; Bukh, 2003; Pakhus, 2000).

The above ICR methods call for firms to take a conscious decision to specifically report on IC performance. A research theme has developed around measuring or assessing the level of ICR inherent in a firm's regular reporting. CA has been used to analyse company annual reports, looking for evidence of ICR against the accepted dimensions of internal, external and human capital (Bozzolan et al., 2003; Goh and Lim, 2004; Guthrie and Petty, 2000). One advantage of this research method is the ability to detect changes in ICR over time. It enables all firms to be assessed, not only those that choose to report specifically on IC. It also has the potential to provide some level of benchmarking, though the current research has mostly been limited to country-level comparisons. However, Marr and Chatzkel (2004) indicated that ICR, as it has been practised to date, is at something of a crossroads. Awareness raising through active publishing in both the scholarly and business press has created a demand for intangible reporting methods and tools. Difficulties still exist with taxonomic definitions of IC and standard measures for IC (Abeysekera, 2006).

Also, Lev (2001) argued that for IC measures to be useful for capital markets they need to exhibit several key criteria: be quantitative in nature; permit inter-firm comparison; be empirically linked to corporate value; and be representative of an agreed comprehensive formulation of IC. These criteria are now used to assess IC measurement approaches for capital markets.

First, the value of the ICD approach can be framed in terms of a diffusion of innovation. Adopters of new "technologies" can be classified as: innovators – about 2.5 per cent of users; early adopters – about 13.5 per cent of users; early majority – about 34 per cent of users; late majority – about 34 per cent of users; and laggards – the last 16 per cent of users (Rodgers, 1962). Rodgers identified
innovators as the more adventurous, sometimes daring, entities who have the ability to understand complex "technical" knowledge and cope with a high degree of uncertainty. This appears an apt description for firms like Skandia who have adopted the ICD approach. In fact, a 2.5 per cent adoption rate for ICD would be generous when describing the current state of ICD adoption. The lack of a standard framework for ICD means it fails the second criterion of enabling inter-firm comparisons.

Second, the intangible asset measurement approach makes use of pre-existing data and therefore is not reliant on individual firms adopting a new practice. It is, however, reliant on finding existing data that best represent IC. To some extent it is also reliant on financial data aggregators to source IC-relevant information from their client base. To date, data aggregators have not explicitly sought out IC-relevant information. Data like R&D and advertising expenditure, which have been used as IC proxies, have been included for other reasons and simply exploited opportunistically by IC researchers. As such, the approach fails the fourth criterion of usefulness in that it is incompletely representative of current IC formulations.

3. Research method
The following research method has been designed to enable studies of IC impacts on capital markets that meet all four of the criteria nominated by Lev (2001) as being useful to capital market actors. The method avoids the ICD issue of needing to achieve a given adoption rate to become useful, while at the same time sustaining the richness of a comprehensive IC formulation provided by ICD research. Like the intangible asset measurement approach, it relies on pre-existing data that have not explicitly been derived for IC purposes. However, the data exist for a majority of firms participating in capital markets. The core technique applied is a form of CA. The data are sourced from publicly available business media.

CA is a technique used to systematically analyse information sources for communication themes or patterns (Guthrie and Abeysekera, 2006). Typically, the technique calls for "human" coding of concepts of interest that can be found in textual reports or publications. Once coded, the concepts can be weighted for relevance and summed to arrive at a quantitative measure for the concept. Historically, the technique has been used to track topics and trends in the public literature. The power of the approach is that it provides one of the few techniques available for analysing textual repositories, which make up by far the majority of business communications. Krippendorf (2004) identified stability, reproducibility and accuracy as key reliability measures. On the negative side, the objectivity of the technique can be questioned, given that the human coder is susceptible to personal biases when performing the coding process. This susceptibility can be mitigated to some extent by using multiple coders and performing inter-coder analyses to assess the consistency with which the task is being performed (Neuendorf, 2001); a minimal sketch of such an inter-coder check is given below. Another potential weakness is the substitution of quantity for quality, where frequency counts do not discriminate the quality of the classified unit. Again, the introduction of weighting schemes could mitigate this weakness to some extent, though at the same time it introduces other classification challenges with respect to how weights are assigned (Guthrie et al., 2004).
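To illustrate the inter-coder check referred to above, the following is a minimal sketch, not part of the original study, of computing Cohen's kappa between two hypothetical coders; the category labels and data are purely illustrative.

from sklearn.metrics import cohen_kappa_score

# Hypothetical IC category labels assigned by two human coders to the same text units
coder_a = ["human", "internal", "external", "human", "internal", "external"]
coder_b = ["human", "internal", "external", "external", "internal", "external"]

# Cohen's kappa corrects raw agreement for the agreement expected by chance
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Inter-coder agreement (Cohen's kappa): {kappa:.2f}")

Values close to 1 indicate that the coding scheme is being applied consistently; low values would suggest the category definitions need tightening before the counts are used.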
In recent times, CA has been used to assess the degree of disclosure of IC components that companies are making in their annual reports (Guthrie et al., 2004).
The method has been replicated in up to 40 studies assessing the level of ICR in different countries around the world (Guthrie and Ricceri, 2009). The authors provide several insights into performing CA. These include:
• categories of classification need to be clearly and operationally defined to minimise ambiguous classification opportunities;
• information needs to be able to be quantified;
• the coding process needs to be objective and repeatable by different coders; and
• a unit of analysis is required (e.g. words, phrases, sentences, paragraphs, portions of a page, inclusive or exclusive of pictures).

CA is seen as a "labour intensive" endeavour, as large quantities of textual information need to be manually coded and checked for accuracy and consistency. Computerised support for CA can therefore be an attractive proposition. Computer-assisted CA is also the topic of substantial technological development (Krippendorf, 2004; Neuendorf, 2001), as electronic search engines look to improve their "classification" of electronic content to provide more accurate and reliable search results from unstructured textual repositories.

Computerised CA tools can be divided into several categories. First, dictionary key word-based tools largely classify text pieces according to the presence of text matching the concepts of interest. Second, concept-based tools are a sophistication of the key word-based tools in that they develop their own dictionary of "concepts" that can be represented by multiple phrases or words, such that relevant concepts can be identified even where the text does not contain the specific words in the search terms. Third, classification assistants include tools like Nvivo[1], which are popular with researchers using qualitative research methods who need to manage the collection and capture of large amounts of interview transcripts. Fourth, electronic taxonomies are similar to concept-based tools in that they are developed from an analysis of existing content, looking for the best "descriptors" that can be applied to a given body of text. Taxonomies are hierarchical structures, with the more abstract terms closer to the top of the hierarchy and more specialised or descriptive terms lower down. Taxonomies provide a navigation aid for those wanting to explore a body of text: by navigating the taxonomy, users can drill down from quite abstract concepts through to quite specific topics.

According to Krippendorf's (2004) CA criteria of stability, reproducibility and accuracy, computer-based CA tools could be seen as strong in terms of stability and reproducibility. With a given body of text, computer-based CA tools should provide the same repeatable result without fault. It is in the area of accuracy that computer-based tools are seen to be deficient. Artificial intelligence technologies have yet to deliver the levels of textual understanding that humans are capable of. Currently, little CA-based research has relied on computer-based CA. One exception is a study by Bontis (2003), who used electronic CA to identify ICD levels for Canadian corporations. However, the low levels of disclosure found may be attributed to the inability of the electronic search to identify disclosures that did not contain an exact textual match to the terms being searched for (Beattie and Thomson, 2005).
Beattie and Thomson (2005) also raised concerns about the non-standard categories used to measure ICD levels and went on to demonstrate how the level of disclosure is related to the number of textual concepts provided for each category of interest.
Using ICD activity as an example, if the absolute level of ICD is desired, for example to compare changes in disclosure patterns over time, CA requires a standardised and consistent categorisation of IC that all researchers can use. Coding methods would also need to be standardised. On the other hand, if ICD CA is being used to compare between firms, market sectors or countries, accuracy becomes less of a concern (within limits), while consistency of categorisation and coding becomes important. In other words, even if the CA method under-represents IC, it does so in a systematic way, which will not impact the validity of comparisons between different entities. Of course, the CA technique needs to do a reasonable job of identifying IC elements, or at least the researcher would need to be able to quickly identify and remove "false hits".

For this method, an independent source of information provided by the content aggregator Factiva (www.factiva.com, accessed 8 April 2009) is used. While no news or business information source could be considered entirely objective, given the range and number of articles available in the Factiva information base, it was anticipated that the biases of individual reporters would be minimised. It was also anticipated that both "good" and "bad" news items would be contained in the information base, giving a more balanced perspective than relying on voluntary corporate disclosures, which are selective and often disclose only positive information about a firm.

The standardised approach for conducting CA in IC areas has been to define a set of descriptor terms for the elements of human capital, internal capital and external capital. The human coder would use these terms to identify IC elements within the source documents, usually an annual report. The use of electronic classification of source documents, it has been argued, is vastly inferior to the human coder in being able to identify appropriate concepts in the text (Beattie and Thomson, 2005). However, this criticism was levelled at simple text-matching searches. It has already been argued that it is the relative "between firm" measures that are important, as opposed to accurately identifying absolute values of concept identification. Nevertheless, to improve the accuracy of the classification search, Factiva's intelligent taxonomy terms were used. Factiva makes use of both concept-level searches and a pre-defined electronic taxonomy. This enabled more consistent identification of the IC concepts.

CA, both manual and computerised, has several identified limitations (Guthrie and Abeysekera, 2006; Hussey and Hussey, 1997; Krippendorf, 2004; Silverman, 1993). For manual CA, the limitations include: the risk of human coders introducing personal bias in assessing content; the risk of inconsistent application of the coding methods used; sensitivity to the nature and number of key terms selected to represent the concept being analysed in the text; sensitivity to the sources used for the CA; the limited volume of text that can be effectively analysed manually; and difficulty in assuring the ability to replicate studies. Computer-assisted CA can introduce a degree of consistency of processing but introduces limitations in its ability to assess text with the same degree of accuracy as the human coder.
While the above limitations are acknowledged, for studies looking to draw from large, distributed and largely qualitative data sources, there appear to be few alternative approaches available (Krippendorf, 2004). The following sections describe the CA process used to develop the quantification of the IC components: human, internal and external capital. In the project, the IC variables are generated using the Factiva computer-assisted CA method. The use of the Factiva
intelligent taxonomy terms is important in maximising the accuracy of the classification when using the electronic search. Factiva generates and manages a fixed set of taxonomy terms that is used to classify all documents in its database. Automated methods are used to assist in the classification; it is anticipated that some human supervision of the automated methods would occur but that, for the larger part, the automation provides consistency while the human supervision corrects gross errors. The exact details of the Factiva intelligent taxonomy are proprietary and not available in the public domain.

A mapping was therefore required between accepted IC terms and the Factiva intelligent taxonomy terms. The IC classifications developed by Guthrie and Petty (2000) were mapped to terms contained within the Factiva intelligent taxonomy term set. The mapping of terms is shown in Table I. One does not need to be constrained by the presented IC formulation; other IC formulations can be accommodated by mapping to the intelligent taxonomy in a similar way. As can be seen from Table I, the Factiva taxonomy terms are more expansive than the IC terms. For example, Factiva terms like management moves and executive pay are human capital terms that only loosely map to the Guthrie and Petty (2000) terms. Additionally, the Factiva terms also identify articles that contain synonyms of the stated terms. The Factiva electronic search results therefore do not suffer the shortcomings of basic keyword searches. The Factiva electronic search approximates the IC discovery levels of a human coder, but with the consistency afforded by computer-based searches. The method for developing a measure for the IC components followed a four-step process, as shown in Figure 1.
Table I. Mapping of IC terms to Factiva intelligent taxonomy terms

IC classification equivalence (Guthrie and Petty, 2000) | Factiva intelligent taxonomy terms

Human capital: employee, education, training, work-related knowledge, entrepreneurial spirit | Employee training/development; Workers' pay; Labour disputes; Lay-offs; Recruitment; Directors' dealings; Executive pay; Management moves

Internal capital: intellectual property, management philosophy, corporate culture, management processes, information/networking systems, financial relations | Intellectual property; Best practice; Competitive intelligence; Corporate governance/investor relations; Corporate process redesign; Knowledge management; Supply chain; Information technology; Debt/bond markets

External capital: brands, customers, customer satisfaction, company names, distribution channels, business collaborations, licensing agreements | Marketing; Joint ventures; Contracts/orders; Profiles of companies; Society/community/work
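To make the Table I mapping concrete, the following is a minimal sketch, not drawn from the original study, of how the mapping might be represented and applied to article metadata that has already been exported from a news aggregator. The file name, column names and export layout are assumptions for illustration only and do not describe Factiva's actual interface or export format.

import pandas as pd

# Mapping of IC elements to taxonomy terms, following Table I
IC_MAPPING = {
    "human_capital": [
        "Employee training/development", "Workers' pay", "Labour disputes",
        "Lay-offs", "Recruitment", "Directors' dealings", "Executive pay",
        "Management moves",
    ],
    "internal_capital": [
        "Intellectual property", "Best practice", "Competitive intelligence",
        "Corporate governance/investor relations", "Corporate process redesign",
        "Knowledge management", "Supply chain", "Information technology",
        "Debt/bond markets",
    ],
    "external_capital": [
        "Marketing", "Joint ventures", "Contracts/orders",
        "Profiles of companies", "Society/community/work",
    ],
}

# Assumed export: one row per article with the firm name and its subject tags
articles = pd.read_csv("articles.csv")  # columns: firm, subject_tags (";"-separated)

def ic_elements(tags):
    """Return the IC elements whose taxonomy terms appear among an article's tags."""
    tag_set = {t.strip() for t in tags.split(";")}
    return [element for element, terms in IC_MAPPING.items() if tag_set & set(terms)]

# A single article may count towards more than one IC element, as noted in the text
counts = (
    articles.assign(element=articles["subject_tags"].map(ic_elements))
            .explode("element")
            .dropna(subset=["element"])
            .groupby(["firm", "element"])
            .size()
            .unstack(fill_value=0)
)
print(counts.head())

Because the mapping is held in one dictionary, an alternative IC formulation can be accommodated simply by swapping the terms, as the section notes.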
Figure 1. IC measurement process:
Step 1: Identify firms from the market sector of interest.
Step 2: Search the Factiva news base for each firm using the Factiva taxonomy terms for human capital, external capital and internal capital.
Step 3: Score each article as "positive", "neutral" or "negative".
Step 4: Compile the data set for human capital, external capital and internal capital by firm.
The first step is to identify each firm with a Factiva company record. The second step takes the Factiva taxonomy terms for each of human capital, internal capital and external capital and searches for articles involving the selected firm. The third step is to classify each story as "positive or neutral" or "negative". The final step is to calculate the index for each of human, internal and external capital, for each firm, using the method described later in this section.

Unlike ICD analyses, the IC measures do not rely on counting concepts within a single document, like an annual report. Because of the large number of documents available, the level of IC content was only used to select a document for inclusion in a "document count" as representative of the attribute of interest (e.g. human capital). A single document may count towards more than one IC element (e.g. if the document contains information about both human capital and external capital). By raising the level of the CA to the document level, sensitivity to the IC classification mode was lowered, as the IC measure was spread across several documents rather than a single document.

The challenge still exists of developing an algorithmic scheme for deriving the measures from document counting. It was observed that most news and information articles were generally of a positive nature, though significant negative news existed and was likely to have a greater impact on firm perception than the more regular positive news (Kopalle and Lehmann, 1995; Novaes, 2002). This effect is also shown through the impact of good and bad news on stock price movements (Dean and Faff, 2004; Goeij and Marquering, 2004). The scoring algorithm therefore weights negative stories at twice the impact of positive news. The scoring algorithm was as follows:

X_i = P_i - 2 N_i
where:
X_i – the IC attribute measure for firm i;
P_i – positive (or neutral) articles for firm i;
N_i – negative articles for firm i.
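As a minimal sketch of this scoring step (illustrative only, with made-up article classifications and firm names), the index can be computed as a weighted article count per firm and IC element:

from collections import Counter

# (firm, IC element, classification) for the articles retrieved and scored in Steps 2-3
articles = [
    ("FirmA", "human_capital", "positive"),
    ("FirmA", "human_capital", "neutral"),
    ("FirmA", "human_capital", "negative"),
    ("FirmB", "human_capital", "negative"),
]

positives = Counter()
negatives = Counter()
for firm, element, classification in articles:
    if classification == "negative":
        negatives[(firm, element)] += 1
    else:  # positive and neutral articles are treated alike
        positives[(firm, element)] += 1

# X_i = P_i - 2*N_i: negative stories carry twice the weight of positive/neutral ones
index = {key: positives[key] - 2 * negatives[key]
         for key in positives.keys() | negatives.keys()}
print(index)  # e.g. {('FirmA', 'human_capital'): 0, ('FirmB', 'human_capital'): -2}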
The general tenet used is that the existence of a news or business information report is considered a positive contributor unless its main purpose is to highlight negative news. For articles which contained both positive and negative news, the dominant tenet of the article was used. No attempt was made to normalise the score based on the number of articles identified, as the level of news coverage was seen as directly related to a firm's IC presence. As an additional test of the adequacy of the above algorithm, sensitivity analyses can be conducted separately for positive articles, negative articles and total articles to assess the robustness of the weightings selected. This is achieved by correlating the raw story count (i.e. without the "good story, bad story" calculation) with the index scores, looking for differences that could not be justified from a qualitative assessment of the story sets.

4. Application to global IT market
To illustrate how the research method can be applied, the global IT market was chosen. This sector is still relatively young, with a history of less than 40 years. However, in this time it has demonstrated dynamic growth and a heavy reliance on IC, and hence is an ideal candidate for demonstrating the application of the method. A sample of some 156 firms from the global IT sector was selected.

To facilitate comparisons with prior research, the model of IC developed by Ghosh and Wu (2007) was adapted for use here. The Ghosh and Wu (2007) formulations use market-to-book ratios and Tobin's Q measures for firm valuation. They then formulate a model of a firm's structural capital information comprising investments in IT, R&D expenditure and patents per employee, together with return on investment (ROI), beta values and earnings growth rate as control variables. Their sample consisted of electronics firms in Taiwan between 2001 and 2002. For this exercise a similar single sector was used, with a marginally larger sample. The data were collected over a longer period, from 2001 to 2004. The same dependent variable of Tobin's Q was selected, calculated in the same way (Chung and Pruitt, 1994). For the independent variables, internal capital (IntC), human capital (HC) and external capital (EC) are included, along with an R&D index determined using the same computer-assisted CA technique as for the IC attributes. ROI is included as the only control variable. The full model used is:

TobQ_it = b0 + b1 ROI_it + b2 RES_it + b3 EC_it + b4 HC_it + b5 IntC_it + e_it,   t = 2001-2004

where:
TobQ_it – Tobin's Q for firm i in year t;
ROI_it – return on investment for firm i in year t;
RES_it – R&D index for firm i in year t;
EC_it – external capital index for firm i in year t;
HC_it – human capital index for firm i in year t;
IntC_it – internal capital index for firm i in year t;
e_it – other value-relevant information for firm i in year t.
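As a minimal sketch of how the pooled model might be estimated, the following assumes the indices and financials have been assembled into a single DataFrame of firm-year observations; the file and column names are illustrative. The rank transformation and variance inflation factor check described later in this section are included as assumptions about the workflow, not as the authors' actual code.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Assumed panel of firm-year observations: firm, year, TobQ, ROI, RES, HC, IntC, EC
panel = pd.read_csv("it_sector_panel.csv")

# Rank-transform the skewed variables (Iman and Conover, 1979)
for col in ["TobQ", "ROI", "RES", "HC", "IntC", "EC"]:
    panel[col] = panel[col].rank()

# Pooled OLS over the 2001-2004 observations
model = smf.ols("TobQ ~ ROI + RES + EC + HC + IntC", data=panel).fit()
print(model.summary())

# Variance inflation factors for the regressors (excluding the intercept)
exog = model.model.exog
for i, name in enumerate(model.model.exog_names):
    if name != "Intercept":
        print(name, round(variance_inflation_factor(exog, i), 2))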
According to the methodology, IC indices for EC, HC and IntC were created for each of the 156 firms. The sample comprised pooled cross-sectional time series data from 2001 to 2004, resulting in 624 observations. Figures 2-4 show how the IC factors break down according to the Factiva intelligent taxonomy mappings. Note that the IntC index is somewhat representative of Ghosh and Wu's (2007) use of IT investments and patents (intellectual property). The HC index is additional to the Ghosh and Wu (2007) formulation, as is the EC index. One can see that the popular categories are influenced by relative "newsworthiness", especially in the case of HC, where management moves and lay-offs were the most newsworthy elements but would not align with what a firm would consider representative of its HC. On the other hand, it is this information that is most exposed to the investing public and potentially most influential on share price valuations.
[Figure 2. Internal capital stories breakdown – story counts by Factiva category: Debt/bond markets, Information technology, Supply chain, Knowledge management, Corporate process redesign, Corporate governance/investor relations, Competitive intelligence, Best practice, Intellectual property]

[Figure 3. Human capital stories breakdown – story counts by Factiva category: Management moves, Executive pay, Directors' dealings, Recruitment, Lay-offs, Labour disputes, Workers' pay, Employee training/development]

[Figure 4. External capital stories breakdown – story counts by Factiva category: Corporate crime, Corporate credit rating, Corporate social responsibility, Market research, Corporate sponsorship, Brand, Joint ventures, Advertising]
The IC indices developed did not meet the test for normality and therefore needed to be transformed for multivariate analysis. Both log and inverse transformations were trialled for each of the variables (Tabachnick and Fidell, 2001). Largely, these traditional transformations were not successful in normalising the data. Given the nature of the distributions and the presence of several extreme outliers for many of the variables, rank transformations were used. While there is some loss of statistical power with rank regressions, monotonically increasing/decreasing distributions with the presence of outliers lend themselves to the use of rank transformations (Iman and Conover, 1979). The Pearson correlation matrix is shown in Table II. The regression results achieved are shown in Table III. Multicollinearity was not a problem, with variance inflation factors for all variables being low (ranging from 1.1 to 1.7).

While the intent of the above analysis is to illustrate the use of the methodology rather than to promote findings from the regression analyses, some simple comparisons can be made with the Ghosh and Wu (2007) results. Similarities exist in that a statistically significant model is achieved (F = 27.20, p = 0.000, adj. R2 = 0.198). Likewise, ROI (t = 9.080, p = 0.000) and R&D (t = 5.625, p = 0.000) are the strongest predictors of Tobin's Q. This study also shows that HC (t = 2.205, p = 0.028) and EC (t = 2.552, p = 0.011) are significant predictors of Tobin's Q at the p < 0.05 level. Where the results vary from the Ghosh and Wu (2007) study is in the statistically significant negative IntC prediction for Tobin's Q (t = -6.854, p = 0.000).
Table II. Correlation coefficients (p-values in parentheses)

            Tobin's Q          ROI               R&D              HC               IntC             EC
Tobin's Q   1
ROI         0.310* (0.000)     1
R&D         0.145* (0.001)     -0.140* (0.001)   1
HC          0.093** (0.029)    0.371* (0.000)    0.106* (0.010)   1
IntC        -0.098** (0.021)   0.349* (0.000)    0.181* (0.000)   0.556* (0.000)   1
EC          0.092** (0.030)    0.283* (0.000)    0.154* (0.000)   0.358* (0.000)   0.544* (0.000)   1

Note: Correlations are significant at the *0.01 and **0.05 levels, respectively (two-tailed)
Table III. Regression test results

Variable       Coefficient      t-statistic (p-value)
ROI            0.372*           9.080 (0.000)
RES            0.220*           5.625 (0.000)
HC             0.091**          2.205 (0.028)
IntC           -0.323*          -6.854 (0.000)
EC             0.103**          2.552 (0.011)
F-statistic    27.20* (0.000)
Adj. R2        0.198
N              532

Notes: p-values significant at *<0.01 and **<0.05, respectively (two-tailed); unstandardised coefficients and p-values
A detailed investigation of this somewhat surprising result is beyond the scope of this paper. The effect has been investigated elsewhere (Lock Lee, 2007), with an explanation found in the interaction effects with a firm's financial soundness. In essence, it was found that the market penalised firms that were investing in IntC when they did not have the financial resources to support such investments. The sample period chosen was the post-dotcom bust, with many IT firms making a loss during this period.

The above analysis illustrates that the research method has successfully met the four criteria of usefulness in that it is quantitative in nature, allows inter-firm comparisons, demonstrates a connection to corporate value and makes use of a comprehensive representation of IC. The comparative analysis example also illustrates the more detailed research into IC effects on firm performance that the method could facilitate.

5. Discussion and conclusions
The use of advanced CA techniques has been instrumental in enabling the study of IC and firm performance across a relatively large sample of firms. IC researchers now have the opportunity to analyse market sectors without sacrificing the fidelity of their chosen IC formulation. The growing capability of computer-assisted CA techniques both reduces the labour intensity of traditional manual CA and improves the verification of results by reducing the potential for human bias. The opportunity for human bias still exists in the form of human classification of the reports as "good", "bad" or "neutral", though the impact, as assessed by sensitivity trials, is minimal.

However, research methods come with limitations as well as benefits. One limitation of CA techniques is the validity and authenticity of the content being analysed. The quality and accuracy of the reporting in individual articles could be questionable. Firms with a strong media following are less likely to be impacted by a wayward reporter, as the weight of accurate reports will mask inaccurate reports. However, firms with a low level of media coverage could be susceptible to a single inaccurate report. The technique also does not overcome the lack of a standard definition of IC (Abeysekera, 2006). The mapping to the Factiva intelligent taxonomy terms also introduces the potential for divergence from the intended IC definition. However, the method is easily adaptable to alternative IC formulations. As long as the IC indices are used for comparison purposes, rather than as absolute IC measures, any errors in classification made by the computerised search engine will be systematic across the whole sample, and therefore will not affect comparative analyses.
When addressing the IC attributes of a firm, for the internal and human capital elements, the CA can only capture what has been made visible by business reporters. This may be affected by a firm's public affairs policies and therefore may not accurately represent a firm's internal and human capital. Additionally, the reported aspects of human and internal capital are limited to "newsworthy" elements, like senior executive movements or competitive intelligence stories; less newsworthy elements, like staff qualifications or a successful new internal business process, are not addressed. Again, these limitations are likely to be greater for firms with a modest or non-existent media following.

This research has relied on the empirical results obtained to argue for the validity of the research approach. There are several opportunities for future research emanating from this work. Methods research comparing the different computer-supported CA processes with manual processes might add useful insights into the current limitations of the approach. Methods research could also be undertaken to help put some practical boundaries on the viable use of the different computer-assisted CA technologies. Additionally, a means of assessing new search technologies, in terms of their utility and validity, would be valued by researchers using CA.

In conclusion, this paper has described a research method for the measurement and analysis of IC across market sectors. The research method provides new utility to the popular CA techniques that have emerged from ICD research. It also exhibits the benefit of being able to address large numbers of firms without the liability of needing to use narrow proxy representations of IC. An illustration of the method is used to provide an analysis of the impact of IC on market-to-book values for the global IT market sector.

Note
1. Nvivo is designed for qualitative researchers who need to combine subtle coding with qualitative linking, shaping, searching and modelling. www.qsrinternational.com/products/productoverview/NVivo_7.htm (accessed 8 April 2007)

References
Abeysekera, I. (2006), "The project of intellectual capital disclosure: researching the research", Journal of Intellectual Capital, Vol. 7 No. 1, pp. 61-77.
Abhayawansa, S. and Abeysekera, I. (2009), "Intellectual capital disclosure from sell-side analyst perspective", Journal of Intellectual Capital, Vol. 10 No. 2, pp. 294-306.
Ball, R. (1992), "The earning-price anomaly", Journal of Accounting and Economics, Vol. 15, pp. 319-45.
Barron, O.E., Byard, D., Kile, C. and Riedl, E.J. (2002), "High technology intangibles and analysts' forecasts", Journal of Accounting Research, Vol. 40 No. 2, pp. 289-312.
Barth, M. and Clinch, G. (1998), "Revalued financial, tangible, and intangible assets: associations with share prices and non-market-based value estimates", Journal of Accounting Research, Vol. 36, pp. 199-233 (supplement).
Beattie, V. and Thomson, J. (2005), "Lifting the lid on the use of content analysis to investigate intellectual capital disclosures in corporate annual reports", paper presented at the 1st Workshop on Visualising, Measuring and Managing Intangible and Intellectual Capital, 18-20 October, Ferrara University, Ferrara.
Beaver, W. (2002), "Perspectives on recent capital market research", The Accounting Review, Vol. 77 No. 2, pp. 453-74.
Boedker, C., Binney, D. and Guthrie, J. (2007), New Pathways to Prosperity: International Trends and Developments in Extended Performance Management, Measurement and Reporting, Society for Knowledge Economics, Sydney.
Boedker, C., Mouritsen, J. and Guthrie, J. (2008), "Enhanced business reporting: international trends and possible policy directions", Journal of Human Resource Costing & Accounting, Vol. 12 No. 1, pp. 14-25.
Bontis, N. (2003), "Intellectual capital disclosure in Canadian corporations", Journal of Human Resource Costing & Accounting, Vol. 7 Nos 1/2, pp. 9-20.
Bontis, N., Dragonetti, N.A., Jacobsen, K. and Roos, G. (1999), "The knowledge toolbox: a review of the tools available to measure and manage intangible resources", European Management Journal, Vol. 17 No. 4, pp. 391-402.
Bozzolan, S., Favotto, F. and Ricceri, F. (2003), "Italian annual intellectual capital disclosure: an empirical analysis", Journal of Intellectual Capital, Vol. 4 No. 4, pp. 543-58.
Bukh, P.N. (2003), "The relevance of intellectual capital disclosure: a paradox?", Accounting, Auditing & Accountability Journal, Vol. 16 No. 1, pp. 49-56.
Bukh, P.N., Larsen, H.T. and Mouritsen, J. (2001), "Constructing intellectual capital statements", Scandinavian Journal of Management, Vol. 17, pp. 87-108.
Chatzkel, J. (2004), "Commentary: moving through the crossroads", Journal of Intellectual Capital, Vol. 5 No. 2, pp. 337-9.
Chauvin, K. and Hirschey, M. (1993), "Advertising, R&D expenditures and market value of the firm", Financial Management, Vol. 22, pp. 128-40.
Chung, K. and Pruitt, S. (1994), "A simple approximation of Tobin's Q", Financial Management, Vol. 23 No. 3, pp. 70-4.
Collins, D.W., Maydew, E.L. and Weiss, I.S. (1997), "Changes in the value-relevance of earnings and book values over the past 40 years", Journal of Accounting and Economics, Vol. 24 No. 1, pp. 39-67.
Dean, W. and Faff, W. (2004), "Asymmetric covariance, volatility, and the effect of news", The Journal of Financial Research, Vol. 27 No. 3, pp. 393-413.
Edvinsson, L. (1997), "Developing intellectual capital at Skandia", Long Range Planning, Vol. 30 No. 3, pp. 366-73.
Edvinsson, L. and Malone, M.S. (1997), Intellectual Capital – Realizing your Company's True Value by Finding its Hidden Brainpower, Harper Business, New York, NY.
Ely, K. and Waymire, G. (1999), "Intangible assets and stock prices in the pre-SEC era", Journal of Accounting Research, Vol. 37, pp. 17-44.
Francis, J. and Schipper, K. (1999), "Have financial statements lost their relevance?", Journal of Accounting Research, Vol. 37 No. 2, pp. 319-52.
Ghosh, D. and Wu, A. (2007), "Intellectual capital and capital markets: additional evidence", Journal of Intellectual Capital, Vol. 8 No. 2, pp. 216-35.
Goeij, P.D. and Marquering, W. (2004), "Modelling the conditional covariance between stock and bond returns: a multivariate GARCH approach", Journal of Financial Econometrics, Vol. 2 No. 4, pp. 531-64.
Goh, P.C. and Lim, K.P. (2004), "Disclosing intellectual capital in company annual reports", Journal of Intellectual Capital, Vol. 5 No. 3, pp. 500-10.
Guthrie, J. and Abeysekera, I. (2006), "Using content analysis as a research method to inquire into social and environmental disclosure: what is new?", Journal of Human Resource Costing & Accounting, Vol. 10 No. 2, pp. 114-26.
Guthrie, J. and Petty, R. (2000), "Intellectual capital: Australian annual reporting practices", Journal of Intellectual Capital, Vol. 1 No. 3, pp. 241-51.
Guthrie, J. and Ricceri, F. (2009), "The management of knowledge resources within organisational practices: some international 'best practice' illustrations", in O'Brien, E. and Southern, M. (Eds), Knowledge Management for Process, Organizational and Marketing Innovation: Tools and Methods, IGI, London.
Guthrie, J., Petty, R., Yongvanich, K. and Ricceri, F. (2004), "Using content analysis as a research method to inquire into intellectual capital reporting", Journal of Intellectual Capital, Vol. 5 No. 2, pp. 282-93.
Hand, J. and Lev, B. (2003), Intangible Assets: Values, Measures and Risks, Oxford University Press, Oxford.
Holland, J. (1996), "Fund management, intellectual capital, intangibles and private disclosure", Managerial Finance, Vol. 32 No. 4, pp. 277-316.
Holland, J. (1999), "Fund management, intellectual capital, intangibles and private disclosure", paper presented at International Symposium: Measuring and Reporting Intellectual Capital: Experience, Issues and Prospects, Amsterdam.
Holland, J. (2001), "Financial institutions, intangibles and corporate governance", Accounting, Auditing & Accountability Journal, Vol. 14 No. 4, pp. 497-529.
Holland, J. (2003), "Intellectual capital and the capital market – organisation and competence", Accounting, Auditing & Accountability Journal, Vol. 16 No. 1, pp. 39-48.
Holland, J. (2006), "Fund management, intellectual capital, intangibles and private disclosure", Managerial Finance, Vol. 32 No. 4, pp. 277-316.
Holland, J. and Johanson, U. (2003), "Value-relevant information on corporate intangibles – creation, use, and barriers in capital markets – 'between a rock and a hard place'", Journal of Intellectual Capital, Vol. 4 No. 4, pp. 465-86.
Hussey, J. and Hussey, R. (1997), Business Research: A Practical Guide for Undergraduate and Postgraduate Students, Macmillan, London.
Iman, R. and Conover, W.J. (1979), "The use of the rank transformation in regression", Technometrics, Vol. 21 No. 4, pp. 499-509.
Johanson, U. (2003), "Why are capital market actors ambivalent to information about certain indicators on intellectual capital?", Accounting, Auditing & Accountability Journal, Vol. 16 No. 1, pp. 31-8.
Johanson, U., Martensson, M. and Skoog, M. (1999), "Measuring and managing intangibles: eleven Swedish qualitative exploratory case studies", paper presented at the International Symposium on Measuring and Reporting Intellectual Capital: Experience, Issues and Prospects, Amsterdam.
Johnson, L., Neave, E. and Pazderka, B. (2002), "Knowledge, innovation and share value", International Journal of Management Reviews, Vol. 4 No. 2, pp. 101-34.
Kaplan, R.S. and Norton, D.P. (1992), "The balanced scorecard – measures that drive performance", Harvard Business Review, Vol. 70 No. 1, pp. 71-9.
Kaplan, R.S. and Norton, D.P. (1996), The Balanced Scorecard: Translating Strategy into Action, Harvard Business School Press, Boston, MA.
Kopalle, P. and Lehmann, D. (1995), "The effects of advertised and observed quality on expectations about new product quality", Journal of Marketing Research, Vol. 32 No. 3, pp. 280-90.
Krippendorf, K. (2004), Content Analysis, 2nd ed., Sage, Thousand Oaks, CA.
Lev, B. (1999), "Seeing is believing: a better approach to estimating knowledge capital", CFO Magazine, February.
Lev, B. (2001), Intangibles: Management, Measurement, and Reporting, Brookings Institution Press, Washington, DC.
Lev, B. and Sougiannis, T. (1999), "Penetrating the book-to-market black box: the R&D effect", Journal of Business Finance & Accounting, Vol. 26, pp. 419-49.
Lev, B. and Zarowin, P. (1999), "The boundaries of financial reporting and how to extend them", Journal of Accounting Research, Vol. 37 No. 2, pp. 353-85.
Liebowitz, J. and Suen, C.Y. (2000), "Developing knowledge management metrics for measuring intellectual capital", Journal of Intellectual Capital, Vol. 1 No. 1, pp. 54-67.
Liu, J. and Thomas, J. (2000), "Stock returns and accounting earnings", Journal of Accounting Research, Vol. 38 No. 1, pp. 71-101.
Lock Lee, L. (2007), "Corporate social capital and firm performance in the global information technology services sector", unpublished research, Sydney University, Sydney.
Marr, B. and Chatzkel, J. (2004), "Intellectual capital at the crossroads: managing, measuring, and reporting IC", Journal of Intellectual Capital, Vol. 5 No. 2, pp. 224-9.
Mouritsen, J. (2003), "Intellectual capital and the capital market: the circulability of intellectual capital", Accounting, Auditing & Accountability Journal, Vol. 16 No. 1, pp. 18-30.
Mouritsen, J., Bukh, P.N., Larsen, H.T. and Johansen, M.R. (2002), "Developing and managing knowledge through intellectual capital statements", Journal of Intellectual Capital, Vol. 3 No. 1, pp. 10-29.
Mouritsen, J., Larsen, H.T. and Bukh, P.N.D. (2001), "Intellectual capital and the 'capable firm': narrating, visualising and numbering for managing knowledge", Accounting, Organizations and Society, Vol. 26 Nos 7/8, pp. 735-62.
Neuendorf, K. (2001), The Content Analysis Guidebook, Sage, New York, NY.
Novaes, W. (2002), "Managerial turnover and leverage under a takeover threat", The Journal of Finance, Vol. 57 No. 6, pp. 2619-49.
OECD (1999), "Measuring and reporting intellectual capital: experience, issues and prospects", paper presented at the International Symposium, 9-11 June, Amsterdam, Organisation for Economic Co-operation and Development, Paris, available at: www.oecd.org/document/15/0,2340,en_2649_201185_1943055_1_1_1_1,00.html (accessed 2 April 2007).
Pakhus, D. (2000), A Guideline for Intellectual Capital Statements, Danish Agency for Trade and Industry, Copenhagen.
Rodgers, E. (1962), Diffusion of Innovation, The Free Press, New York, NY.
Silverman, D. (1993), Interpreting Qualitative Data: Methods for Analysing Talk, Text and Interaction, Sage, London.
Smithers, A. and Wright, S. (2000), Valuing Wall Street: Protecting Wealth in Turbulent Markets, McGraw-Hill, New York, NY.
Stewart, T.A. (1997), Intellectual Capital: The New Wealth of Organizations, Doubleday, New York, NY.
Sveiby, K.E. (1997), The New Organizational Wealth: Managing & Measuring Knowledge-Based Assets, Berrett-Koehler, San Francisco, CA.
Sveiby, K.E. and Risling, A. (1986), Kunskapsföretaget (The Know-How Company), Liber, Malmö.
Tabachnick, B. and Fidell, L.S. (2001), Using Multivariate Statistics, 4th ed., Allyn and Bacon, Boston, MA.
Unerman, J., Guthrie, J. and Striukova, L. (2007), United Kingdom Reporting of Intellectual Capital, Institute of Chartered Accountants in England and Wales, London.
Villalonga, B. (2004), "Intangible resources, Tobin's Q, and sustainability of performance differences", Journal of Economic Behaviour and Organization, Vol. 54, pp. 205-30.
Walker, R. (2009), "Discussion of Lev, Radhakrishnan and Zhang", Abacus, Vol. 45 No. 3, pp. 299-311.
Wall, B. and Doerflinger, M. (1999), "Making intangible assets tangible", Knowledge Management Review, September/October, pp. 28-33.
Wallman, S. (1995), "The future of accounting and disclosure in an evolving world: the need for dramatic change", Accounting Horizons, Vol. 9 No. 3, pp. 81-91.

Further reading
Flöstrand, P. and Ström, N. (2006), "The valuation relevance of non-financial information", Management Research News, Vol. 29 No. 9, pp. 580-97.

Corresponding author
James Guthrie can be contacted at:
[email protected]