InfoMetrics: a Structural Equation Modeling approach to information indicators and “e-readiness” measurement Dan M. Grigorovici Corina Constantin Krishna Jayakar Richard D. Taylor Jorge Reina Schement Institute for Information Policy College of Communications 0313 B Information Sciences & Technology Building University Park, PA 16802, USA
[email protected] Voice: (814) 863-0988 Fax: (814) 863-6119 Paper proposed for presentation at the 15th Biennial Conference of the International Telecommunication Society Berlin, Germany September 5-7, 2004
Abstract

This paper examines the widely diffused measurement instruments that aim to gauge “e-readiness” across countries and proposes a new tool for measuring the Information Society. After a critical review of the literature and of the major macro-level indices currently used for measuring the “Information Society”, we propose a new theoretical model based on a latent variable approach. We test it with secondary data and report the results of a series of confirmatory factor analyses, in an attempt to offer a more sound and valid measurement instrument that can both explain ICT development and provide policy makers with a more valid tool for decisions affecting technology access and development. We note the limitations of the current study and avenues for further research, and close with the implications of this approach for improving measurement validity, for making the leap from descriptive to explanatory conceptualizations, and for supporting better business and policy decisions.

Keywords

InfoMetrics, 4C, information indicators, e-readiness, e-metrics, Information Society, Structural Equation Modeling, e-learning, e-commerce, Digital Divide, access, development, Information and Communication Technologies, measurement.

Literature review

Macro e-metrics: conceptual and methodological issues

“While everything matters in the Information Society, not everything matters the same” (Sciadas 2003, 42). Hence the recurrent need to measure the extent to which each separate variable contributes to the ICT development of a country, region, or business sector. The quest for “information indicators” (also called “Information Society indicators” or “e-readiness indicators”) stems from this commonsensical realization.
Welch (cited in Arquette 2001, 3) noted: “It seems to me that an important starting point in our joint exploration of which countries are more advanced as information societies - and which will be in 2002 - is to get clear as to what we are measuring”. Attempts to create “information indicators” date back to the 1960s, with Machlup’s effort to measure the share of gross national product (GNP) represented by “information goods and services” (Shifflett & Schement 1996, 3), while sustained work in this area took off with Borko and Menou’s work on the “Information Utilization Potential” (IUP) (Shifflett & Schement 1996, 17). To put ICT to effective use, a country must be “e-ready” in terms of infrastructure; the accessibility of ICT to the population at large; and the effect of the legal and regulatory framework on ICT use. If the digital divide is to be narrowed, all of these issues must be addressed in a coherent, achievable strategy tailored to the local needs of particular countries. Developing-country leaders can use e-readiness assessment to help them measure and plan for ICT integration. It can help them focus their efforts from within, and identify areas where external support or aid is required. But an assessment alone is insufficient, and decision makers face two key challenges in making effective use of this tool. First, they need to understand how ICT can help their countries achieve economic and social benefits, and to set realistic goals accordingly.
Three factors motivate developing-country decision makers to improve e-readiness and promote the adoption of ICT in their countries. First, ICT promises enormous benefits as part of the solution to economic and social problems. Second, countries face the threat of being left further behind if they do not address the growing digital divides both between and within countries. Third, international leaders, foreign donors, and lending agencies are integrating ICT into development and aid programs. Jeskanen-Sundström of Statistics Finland (2001) discussed three approaches to defining new statistical systems to measure business and social changes in the light of ICT developments: (1) the indicator approach; (2) the new economy approach; and (3) the intellectual capital approach. The indicator approach became rooted in the nineties, when a common theoretical framework for ICT statistics was not yet in place. The aim of the indicator approach was to use many different types of indicators to illustrate, in a descriptive way, ICT developments with respect to the technical infrastructure, networks and penetration rates, application services, education, entry into the labor market, the structure of the ICT sector, production and foreign trade of ICT products, R&D, employment structures in the ICT sector, use of ICT in business and at work, and home use and time-use patterns. This set of indicators is not seen as constant, but as changing over time. Moreover, the indicators are used to compute additional indicators and to monitor social impacts, such as the digital divide. The New Economy approach is focused on economic growth and productivity. The New Economy, seen as sustained noninflationary growth with a high level of employment, emerged in parallel with the emergence of ICT. It is therefore natural to ask whether there is a causal link between the two phenomena of the New Economy and ICT advancement.
The New Economy approach to a statistical indicator system is, consequently, focused on ICT-related data from national accounts, the labor market, productivity, measurement of real output, price indices, business statistics, international (cross-border) transactions, the digital market, the balance of payments, wage and price indices, production output and input, etc. The intellectual capital approach originates in the OECD efforts of the early 1980s, focusing on intangible investments, such as corporate investments in R&D, marketing, training, software and other immaterial assets. The concept of intellectual capital has developed into the concept of knowledge management, which comprises human capital (know-how, motivation, commitment), intangible capital (data, information, immaterial rights, organization), strategic reserves (capacity to produce innovations and create new products), and social capital (social networks, social intelligence). As of now, no successful proposal has been made to measure intellectual capital within companies, and to compare it across companies, sectors, and nations. Jeskanen-Sundström (2001) pointed out five basic requirements for the production of new statistics: (1) the information need must be identified as clearly as possible, (2) indicators must be measurable, (3) the statistical classifications must be generally accepted, (4) data must be collectable with reasonable effort, and (5) new indicators should have “points of contact” with existing statistical systems.

Current measurement efforts: a review[1]

Various academic institutions, private organizations and commercial publishers issue e-readiness indicators that synthesize this information into e-readiness indexes. Two
[1] We did not include all the relevant indexes current in the literature. For a more detailed description and/or analysis of these, see Grigorovici et al. (in press) and bridges.org (2001b).
recent examples are the “Global Information Technology Report 2003” published by the World Economic Forum and INSEAD, and the e-readiness rankings of the Economist Intelligence Unit; commonly, however, e-readiness has been conceived of and operationalized in terms of infrastructure readiness. It is imperative to differentiate between readiness, use, impact, and enabling factors (whether internal, e.g., skills, or external, e.g., the policy environment). The readiness dimension contains two aspects: access and the determinants of access. Access to the Internet is possible with computers and other access devices (e.g. mobile phones, PDAs, game consoles, etc.); access can be differentiated among different groups of the population, and it is possible with different levels of security. Perceived barriers such as cost and security, together with digital literacy, are social factors that determine whether the Internet is actually accessed. The latter also determine the intensity of use (of e-mail and WWW services as well as of access devices), which can also be measured directly. The impact dimension is more critical. A better approach to assessing the effects of the Internet on European societies, for example, is to bring together Internet access and use variables with dependent variables (e.g. income, productivity) in causal analyses (SIBIS 2003, 9).
Figure 1: The Networked Readiness Framework. Source: Dutta et al. (2004), 4. Speaking to the earlier discussion of aggregated versus disaggregated measures, Dutta remarks in the Global Information Technology Report 2003: “the complexity of ICT issues in a nation can get obscured behind the numerical score of the NRI” (Dutta et al., 2004, 5). A graphical representation of the Networked Readiness Index (NRI) appears in Figure 1 above. The authors propose that “The Environment component index is designed to measure the degree of conduciveness of the environment that a country provides for the development and use of ICT” (Dutta et al. 2004, 6), but while Infrastructure is part of Environment as proposed in their measure, it should appear in the Readiness factor, as it does in our conceptualization. It is thus unclear whether it is implicitly double-counted; in any case, a construct validity issue arises. One of the most rigorous measurement models proposed, and one of the first attempts to provide a firm theoretical basis for such indexes, is the ORBICOM Infostate index/model. The recently released ORBICOM report (Sciadas 2003) attempts to offer “a global set of indicators (infostate) showing how the availability of ICTs and access to networks can be a misleading indicator if it neglects people’s skills, and if ICT networks and skills combined (infodensity) are not matched by a measurement of what
individuals, business and countries actually do with such technologies (info-use)” (Sciadas 2003, VIII). The conceptual Framework (shown below in Figure 2) introduces the notion of a country’s “ICTization” or Infostate, as the aggregation of Infodensity and Info-use.
Figure 2: The ORBICOM conceptual model of Infostates. Source: Sciadas 2004, 7. As proposed in Sciadas (2003), “infodensity refers to the stocks of ICT capital and labor, including networks and ICT skills, indicative of a country’s productive capacity and indispensable to function in an Information Society. Info-use refers to the uptake and consumption flows of ICTs, as well as their intensity of use”. Thus, it is differences among countries’ Infostates that constitute the Digital Divide. Since Infostates are dynamic and ever-evolving, the Digital Divide is a relative concept. Any progress made by developing countries must be examined against the progress made by developed ones. Both Infodensity and Info-use contribute to the Digital Divide, with networks and ICT uptake contributing more than other components. Skills, as measured by education indicators, also contribute significantly to the Divide, and increasingly so as we move from generic to more specific measurements. If anything, the lack of better measurements in this area leads us to underestimate the extent of the Divide. Sciadas (2003) reports that “a close correlation exists between Infostates and per capita GDP”. The regression analyses performed reveal that for every point increase in Infodensity, per capita GDP increases anywhere between $136 and $164. There are notable exceptions, though: countries with similar GDPs can have very different Infostates and vice versa. This speaks to the importance of national e-policies and e-strategies, implying that their design and implementation matter. Arguing for the relative importance of some variables compared to others for different countries (depending on context), ORBICOM states that: “looking beneath the aggregates of Infodensity and Info-use we can identify specific causes of the Digital Divide which, inevitably, are associated with individual components of measurement.
It should be noted from the outset that each and every constituent component of Infostate is partially responsible. This includes networks, skills, ICT uptake and intensity of use. However, the extent to which each of these contributes differs widely both across indicators and across countries” (Sciadas 2003, 32).
However, one of the important findings in Sciadas (2003) is that “the positive relationship between the Infodensity and Infostate indices and per capita GDP is nonlinear. Specifically, the cross-sectional data suggest that the impact of the indices on GDP per capita is greater the higher the value of the index” (Sciadas 2003, 84), as shown in Figure 3 below.
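The nonlinearity reported by Sciadas can be probed with a simple comparison of linear and quadratic fits. The sketch below uses entirely synthetic data (the index values, coefficients, and noise level are invented for illustration, not taken from Sciadas 2003) to show how a convex Infostate-GDP relationship leaves a quadratic fit with a markedly lower residual sum of squares than a linear one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cross-section: Infostate index values for 40 countries.
infostate = rng.uniform(10, 200, 40)

# Simulate GDP per capita with a convex relationship, mirroring the
# finding that the marginal effect of the index grows with its value.
gdp = 150 * infostate + 0.8 * infostate**2 + rng.normal(0, 2000, 40)

# Fit linear and quadratic models and compare residual sums of squares.
lin = np.polyfit(infostate, gdp, 1)
quad = np.polyfit(infostate, gdp, 2)

rss_lin = np.sum((gdp - np.polyval(lin, infostate)) ** 2)
rss_quad = np.sum((gdp - np.polyval(quad, infostate)) ** 2)

print(f"linear slope:  {lin[0]:.0f} $ per index point")
print(f"RSS linear:    {rss_lin:.3e}")
print(f"RSS quadratic: {rss_quad:.3e}")
```

A large drop in residual variance from the linear to the quadratic fit is the kind of evidence that would support the cross-sectional convexity claim.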
Figure 3: Relationship between Infostate and GDP/capita, 2001. Source: Sciadas 2003, 83. Recently added to the list of existing metrics, the International Telecommunication Union’s “Digital Access Index (DAI) measures the overall ability of individuals in a country to access and use Information and Communication Technology”. The DAI comprises eight variables organized into five categories/factors: infrastructure, affordability, knowledge, quality, and usage (ITU 2003a, b). The findings of the DAI 2003 point to affordability and education as important factors in technology adoption.
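The DAI’s construction (indicators scaled against goalposts, averaged into five category scores, which are then averaged into the index) can be sketched as follows. The indicator names, values, and goalposts below are hypothetical placeholders, not the ITU’s official figures:

```python
# Hypothetical values for one country: each entry maps an indicator to
# (observed value, goalpost). Affordability is assumed pre-inverted so
# that higher is better.
indicators = {
    "infrastructure": {"fixed_lines_per_100": (25, 60), "mobile_per_100": (70, 100)},
    "affordability": {"inverted_access_price": (0.8, 1.0)},
    "knowledge": {"adult_literacy": (95, 100), "school_enrolment": (85, 100)},
    "quality": {"intl_bandwidth_index": (0.5, 1.0), "broadband_per_100": (5, 30)},
    "usage": {"internet_users_per_100": (40, 85)},
}

def category_score(inds):
    # Scale each indicator against its goalpost, capped at 1.0, then average.
    return sum(min(v / goal, 1.0) for v, goal in inds.values()) / len(inds)

scores = {cat: category_score(inds) for cat, inds in indicators.items()}
dai = sum(scores.values()) / len(scores)  # equal weight to each category
print({k: round(v, 3) for k, v in scores.items()}, round(dai, 3))
```

Note that this equal-weight averaging embodies the “sequential” (compensatory) view of aggregation discussed later in the paper.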
Figure 4: Indicators of the Digital Access Index. Source: ITU 2003b. One of the problems that Sciadas (2003) comments upon is that different measurements yield different rankings[2], such that the problem of e-readiness measures, once again, is one of clarifying the definitions of the concepts used, grounding indicators in a firm theoretical model, and resolving the issues discussed above: should we measure by industry sector or overall? Put otherwise, it is worth asking: to aggregate or to disaggregate? “Aggregation across industries may disguise
[2] For a table comparison of two such differences (between the NRI and the EIU E-Business Readiness Index), see UNCTAD 2003c, Table 1.11, page 14; for a comparison of the existing measurement frameworks, see bridges.org (2003) and UNCTAD (2003a), Table 1, p. 9.
some of the impacts of ICT” (OECD 2003, 12); hence, as explained above, the need for a multi-layered model. The prevalent issues discussed above are indicative of the complexity and dynamics of the ICT system. Efforts to define statistical indicators are traditionally based on a deductive approach, where a group of experts identifies the statistical measures of interest. The complexity of the ICT system, however, calls for a hypothesis-driven inductive approach to indicator identification. This approach must be rooted in a model of the ICT system from which the hypothesis classes can be derived. Indicators are then sought which can provide answers to the acceptance or rejection of the hypothesis. (Beroggi 2003, 20). The main advantage of a hypothesis-driven, inductive approach to constructing an e-metric such as Beroggi’s (2003) is that the direction of causality between factors and variables can be assessed, so that answers to specific hypotheses can be given when the model is tested empirically. A sample of questions that can be addressed appears in Table 1 below. From our perspective, we chose SEM as a powerful statistical modeling technique that can address Beroggi’s (2003) imperatives[3]. It is thus evident when examining Table 1 how important reliable macro-level e-metrics are for both policy and business approaches to ICT development.
Table 1: Example of multi-layer e-metric system and possible hypotheses that can be tested using it. Source: Beroggi (2003), 28.
Furthermore, due to its multi-layered approach, depending on the types of questions asked and the country or sector (social, business, etc.) of interest, the general model can be disaggregated into the layers and variables that are relevant for the specific sector. The advantage of this type of approach is that it does not neglect measures at the levels below the composite index, as previous indicator models did. Indicators in general should be “rooted in theory” (SIBIS 2003, 11). “The theoretical model and selection of indicators determine the quality and predictive power of the indices based thereon” (UNCTAD 2003a, 5). Hence, a missing (or poorly formulated) theoretical model leads to poor index construction and predictive power. A good overview of the implicit theoretical views that lie at the foundation of most existing indexes appears in UNCTAD (2003a). There have been two types of implicit
[3] For a more detailed explanation of our choice of SEM as the methodological tool for constructing the InfoMetrics model, see Grigorovici et al. (in press).
views on the ICT process (although fully developed theoretical models have not been constructed for either): the “sequential” and the “synergistic”. A sequential view of the ICT development process implies “an additive model in which factors with implied equivalence may offset each other. In other words, strength in one aspect can compensate for weakness in another, as above. This is also the perspective within which the idea of “leapfrogging” fits. For instance, Cambodia's lack of fixed mainlines may not matter, as its high mobile penetration rate is likely to offset this, implying “leapfrogging” by “skipping a step” in the sequence” (UNCTAD 2003a, 17). This is the type of view that appears in the UNDP’s TAI, or in any index that assigns equal weight to every factor. But in reality the determinants of ICT development do not have the same importance and influence on e-readiness. This means we should use differential weighting based on a host of factors, usually related to “enabling factors”: the political and business environment, etc. “Conversely, a synergistic view of a critical mass of associated technologies essential for a country's advancement in technology implies a multiplicative model in which weakness in any one input may hinder and impede effective development on the basis of nonequivalent inputs” (UNCTAD 2003a, 17). Only lately has the need for a theoretically driven set of indicators been affirmed as indispensable to the creation of an e-metric; hence there are only a few proposals for indexes backed by a fully developed and explicit theoretical model: Beroggi (2003), ORBICOM (2003), ISIS (2003). Beroggi (2003), for instance, distinguishes between “availability” (“A-factors”) and “opportunity” (“O-factors”) as external factors that can be directly influenced by policy makers, while in turn affecting users directly by influencing their ICT adoption decisions.
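The contrast between the additive (sequential) and multiplicative (synergistic) views quoted above can be made concrete with a toy computation. The sub-scores below are invented; the point is only that an equal-weight arithmetic mean lets Cambodia-style “leapfrogging” offset a weak input, while a geometric mean penalizes it:

```python
import math

# Normalized sub-scores (0-1) for two hypothetical countries. Country B
# "leapfrogs": weak fixed-line infrastructure, strong mobile penetration.
country_a = {"fixed": 0.6, "mobile": 0.6, "skills": 0.6}
country_b = {"fixed": 0.1, "mobile": 0.9, "skills": 0.8}

def additive(scores):
    # Sequential view: equal weights, strengths offset weaknesses.
    return sum(scores.values()) / len(scores)

def multiplicative(scores):
    # Synergistic view: geometric mean, so a weak input drags down the index.
    return math.prod(scores.values()) ** (1 / len(scores))

for name, s in [("A", country_a), ("B", country_b)]:
    print(name, round(additive(s), 3), round(multiplicative(s), 3))
```

Both countries score 0.6 under the additive rule, but the multiplicative rule ranks B well below A because of its weak fixed-line input, illustrating why the choice of aggregation rule is itself a theoretical commitment.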
The preparedness factors (“P-factors”) are a combination of the technical readiness, subjective attitudes, and personal characteristics of firms and individuals as users decide whether to adopt or forgo ICT. The collection of these factors summarizes how prepared users are to adopt ICT (Beroggi 2003, 22). Preparedness differs from readiness or willingness to adopt ICT: while readiness and willingness already imply a commitment, i.e., a decision to adopt ICT, preparedness does not. A final class of factors affecting policy objectives and policy decisions comprises societal values and political aspects. Under the same circumstances, different governments and varying national or European priorities can lead to different policy decisions. A graphical representation of Beroggi’s (2003) conceptual model of the ICT dynamic system that lies at the core of his proposal for indicator development appears in Figure 5 below.
Figure 5: Beroggi’s conceptual model of indicators. Source: Beroggi 2003, 23. The collection of all variables grouped into the three classes (availability, opportunity, and preparedness) comprises the users’ explanatory variables; i.e., they can help explain users’ decisions to adopt ICT to different degrees, at different points in time, for different purposes, etc. For example, age and education could be the explanatory variables for the percentage of telework (the users’ decision to work at home) present in a certain industry branch. The users’ explanatory variables, together with the users’ decisions (e.g., percentage of telework), comprise the impact explanatory variables. For example, the presence of broadband technology and low Internet access costs, as users’ explanatory variables, together with the degree of telework, as the users’ decision to adopt ICT, could be the impact explanatory variables for savings in office rental costs (as the impact of the users’ decision to telework). The three sets of factors – A-factors (availability), O-factors (opportunity), and P-factors (preparedness) – affect the users’ decisions to adopt ICT. Dotted links are shown in Figure 5 between the three factor classes, indicating that there might be correlations between some of the factors across the three classes. Although the model described above does not explicitly propose a latent variable approach to studying e-readiness indicators and related measures of ICT development, it is clear even from the graphical representations that they are very similar to a path diagram. The three models reviewed above thus point in a similar direction: a multi-layered, theoretically grounded model based on a multivariate approach. This suggests that Structural Equation Modeling could prove a better tool for testing theoretical models in this area.
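As a minimal sketch of the latent-variable logic behind such path diagrams, the fragment below simulates four observed indicators driven by a single latent “readiness” factor and recovers the loading pattern from the indicators’ correlation matrix via principal-factor extraction. All numbers are synthetic, and a full SEM analysis would instead use dedicated software on observed country data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# One latent "readiness" factor driving four observed indicators with
# known loadings (a toy stand-in for a full SEM measurement model).
latent = rng.normal(size=n)
loadings_true = np.array([0.9, 0.8, 0.7, 0.6])
noise_sd = np.sqrt(1 - loadings_true**2)  # keeps indicators at unit variance
X = np.outer(latent, loadings_true) + rng.normal(size=(n, 4)) * noise_sd

# Principal-factor extraction: the dominant eigenvector of the correlation
# matrix approximates (slightly overstating) the one-factor loading pattern.
R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)          # eigenvalues in ascending order
v = np.abs(eigvecs[:, -1])                    # dominant eigenvector, sign-fixed
loadings_est = np.sqrt(eigvals[-1]) * v

print("true loadings:     ", loadings_true)
print("estimated loadings:", loadings_est.round(2))
```

A confirmatory analysis would go further by fixing the hypothesized zero loadings in advance and testing model fit, which is the step the InfoMetrics approach takes with SEM.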
The present paper aims to explicitly propose and test this approach through what we have called the InfoMetrics 4C model. The advantages of this dynamic decision-analytic framework lie in the dynamics (loops, feedback, learning effects, delays, remedies, causal relations, etc.) that are central to the model and can be studied with it. Such a flexible approach could address the advantages and disadvantages of using absolute versus relative scores/indicators when studying ICT development, and the biases thus introduced. For example, in absolute terms nearly all countries will show increases in telecommunications connectivity. The ITU concluded that “it is only by making international comparisons that it is possible to show which policies have been more successful than others. For this reason, an approach based on comparative rankings
may be more meaningful than one that uses absolute growth rates” [italics added] (ITU 2003b). The ITU argues that relative growth rates are more insightful for policy analysis than absolute growth rates. Evidence from other studies illustrates some issues that may arise with relative indices. A decline in an indicator does not imply an actual drop, only that competing countries have advanced faster. Thus, “Germany is considerably closer to other leading nations than to the U.S. and Japan…this distancing is not due to any decline in Germany, but rather to the remarkable gains by the U.S” (UNCTAD 2003a, 18). UNCTAD (2003a) also points out other sources of bias inherent in the indexes currently in use: the issue of reference points (there are no preset absolute ceilings for most variables, and taking a developed country such as the US as the reference point for, say, Cambodia understates the latter’s efforts by comparing it to an outlier); the unit of analysis; national size effects; and data omission effects (lack of data is more of a problem for poorer countries, so missing data for those countries biases the distribution of the final indexes). Also, with a few exceptions that have just started to appear (reviewed above), almost all indexes currently in use are descriptive and rankings-based, which limits the explanatory power of the measure[4]. For instance, “indices are not capable of determining or quantifying causation, for which more sophisticated statistical techniques are required.[…] Indices provide a ready means of measuring a standard set of “symptoms”, rather than their wider, more complex “causes” (UNCTAD 2003a, 6). Interestingly, as discussed above, there is a trend towards employing more advanced approaches, and Beroggi (2003) even redefined some of the classical measures (OECD, etc.) in terms of the latent model implied by them, thus taking the first step towards redesigning the approach to measurement.
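The ITU’s point about comparative rankings can be seen in a toy example with invented user counts for three hypothetical countries: the same data produce different country orderings under absolute and relative growth.

```python
# Hypothetical Internet users per 100 inhabitants at two points in time.
t0 = {"Alpha": 50.0, "Beta": 10.0, "Gamma": 2.0}
t1 = {"Alpha": 60.0, "Beta": 18.0, "Gamma": 5.0}

absolute = {c: t1[c] - t0[c] for c in t0}            # percentage points gained
relative = {c: (t1[c] - t0[c]) / t0[c] for c in t0}  # growth rate

def rank(d):
    # Countries ordered from largest to smallest gain.
    return sorted(d, key=d.get, reverse=True)

print("by absolute gain:", rank(absolute))  # the leader keeps the top spot
print("by relative gain:", rank(relative))  # catch-up countries surface
```

Alpha gains the most points, but Gamma more than doubles its user base, so the two ranking rules tell opposite policy stories about who is “advancing”.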
A graphical representation of the implied OECD ICT adoption model, with the direction of causality between variables shown as a path diagram, appears in Beroggi (2003), as in Figure 6 below.
Figure 6: OECD model from the perspective of an SEM-based approach. Source: Beroggi (2003), 53. Similarly, the US Digital Divide model implied in “A Nation Online” (US 2001) can be represented as a path diagram such as the one in Figure 7 below:
[4] A more detailed explanation of the methodological reasons why descriptive measures are low in explanatory power, as well as of the benefits of a latent variable approach and its usefulness to research, business and policy, appears in Grigorovici et al. (in press).
Figure 7: “A Nation Online” model from the perspective of an SEM-based approach. Source: Beroggi (2003), 57. It is to this purpose that we now propose our InfoMetrics 4C model, base it on a Structural Equation Modeling approach, and report empirical findings.

The InfoMetrics 4C conceptual model

“If it is time to measure the information society, it is also time to rethink traditional indicators” (ITU 2003b, 6). As discussed above, current research on ICT indicators suffers from an acute lack of reliable measurement instruments rooted in a theoretical model. Theoretically driven indexes are scarce, as conceptual models have only recently begun to be developed. Among them, the e-Europe-related models (Beroggi 2003, ISIS 2003) and ORBICOM (Sciadas 2003, 2004) are the most developed theoretically. (See the representation of Beroggi’s model in Figure 5 above.) The e-Europe SIBIS indicators (an exhaustive list appears in SIBIS 2003, 16-21) belong to the most complex theoretical model used to choose a set of indicators, but it is not clear whether it could be successfully operationalized and tested empirically within a single SEM, given the large number of variables included and the complexity of the model[5].
[5] Testing the SIBIS (2003) model with US data is currently underway under the InfoMetrics research program by the present authors.
Figure 8: the SEAMATE model of IT linkage. Source: ISIS 2003, 10. Choucri (2003) shows the usefulness of such an approach in that it can accommodate different foci based on different objectives or sectors (policy, business, research) for which different sets of variables could be more relevant than others. “Our view is that there is no one single key question central to the e-Readiness domain, but that the relevant questions as well as the strategy for producing answers are driven by who is asking that question, why, and for what purposes. To illustrate, for businesses, with primary interest in expansion into new markets, the question might be the nature of ‘fit’ between the business and the relevant context and contents of potential applications, or opportunities. For national governments, whose interest is in effective targeting of investments in IT, the question might be: what are best ways of determining ‘gaps’ and ‘needs’, and strategies for closing the need-gap” (Choucri 2003, 5). As stated above, the devil is in the details: there is more information at the bottom, rather than the top (see Sciadas 2003, Choucri 2003): “In this context, our key propositions are that (i) different countries (or economies) are characterized by different e-Readiness profiles or propensities defined by their individual access and capacity conditions; (ii) given the variety and diversity of characteristics, there may well be a wide range of variables that shape propensities for both access and capacity – with respect to some opportunity; (iii) such propensities enable the pursuit of specific applications within the broad opportunity context that a country may have at any point in time. It is likely that some e-Readiness factors are more informative than others, however, it would be useful to know what factors are critical, for which profiles, why and how. 
At the same time, e-Readiness profiles are not fixed; they are subject to investments, policy, and a host of contextual socio-economic factors. Given this variability (and flexibility) different countries can and do embark on different pathways toward greater e-Readiness in general or toward e-Readiness targeted toward a specific opportunity. It is fair to ask: Profiles of what, precisely? Pathways from where and to what? And for what type of opportunities?” (Choucri 2003, 7).
Thus, consistent with the objectives of our approach, “we must develop a more robust set of ‘rules’ and tools for coupling conditions, content, and context than we have done to date” (Choucri 2003, 16). Also, given the complexity of the topic under investigation, we believe that model construction and empirical validation should develop on two parallel levels, which is what we propose for the InfoMetrics 4C model: a general, aggregated model followed by applied, disaggregated structural/sectoral (business, social/policy) and function/application-based submodels (e-commerce, e-work, e-learning, e-government). Due to space limitations, the model proposed and empirically tested in this paper represents the first stage of the InfoMetrics© work: the general, aggregate level of InfoMetrics. Work is in progress in parallel on the next steps, specifically the e-learning 4C model, to be presented elsewhere. The issue, again, is one of levels and of the decision to establish the model at one specific level rather than another, depending on the particular sector studied. A good approximation of our model is Sciadas’s Infostate model; in fact, the way his implied model is represented graphically looks very much like a path diagram, as in Figure 9 below.
Figure 9: The ORBICOM diagram of the Infostates model. Source: Sciadas 2004, 10. Our current 4C model is an improvement on what we proposed earlier, at a roundtable on measuring the Digital Divide held at the University of California, Los Angeles (GSEIS 2002, 5), as the “3C” (connectivity, capability, and content) model. There, we acknowledged the need to move beyond an understanding of Information Society (and hence access) issues solely in terms of connectivity: “A conceptual breakthrough is needed in our public discourse about the digital divide, one that incorporates the complex and often overlapping issues characterizing connectivity, capability, and content. All three dimensions are interrelated […] As such, we cannot realize our investment in connectivity without simultaneously investing in capability and content” (GSEIS 2002, 8). “Countries’ preparedness to take part in the global information society cannot be evaluated without complementing this category of data with other indicators that capture information about qualitative aspects of countries’ economic, legal and policy framework” (UNCTAD 2003c, 12). This statement has been at the forefront of our model construction process, in which we intended to take into account (i.e., include in the model as different factors) all the relevant factors and study the relationships between them. The authors of the present paper recently came across a model that uses a similar “C” framework (Rao 2003, 12), in what is called the “8 C’s of the digital economy
(parameters beginning with the letter C): connectivity, content, community, commerce, culture, capacity, cooperation and capital” (Rao 2003, 1). From an analysis of the cited framework, Rao (2003) does not explain the reasons behind his 8 C’s, nor does he provide an empirical test of his indicators, so that it seems his framework is closer to a taxonomy than to a conceptual model. Moreover, although some of his “C’s” are identical to ours (connectivity, content, and “capacity”, which we called “capability”), upon a comparison of his remaining 5 variables (community, commerce, culture, cooperation, capital) with our “context” factor, it is evident that we had taken a different approach, taking these into account at different levels across all our 4C’s, not the least of which is “context” (our name for what is usually discussed as external, enabling factors). Connectivity “Connectivity is narrowly defined as the physical infrastructure available to a country, as distinct from broader factors determining access (e.g. literacy, cost). It represents the basic “limiting factor” regarding access to and use of ICTs – without the essential physical hardware, ICT use is not possible” (UNCTAD 2003a, 10). We proposed the “Aggregate Index of Household Connectivity” (AIHC) as the aggregate measure at this level (GSEIS 2003, 12). “Until recently, infrastructure had been considered as the main obstacle to improving access to ICTs” (ITU 2003b, 3). “While such per capita measures are convenient and useful for comparing general differences between and within countries, they can be misleading” (ITU 2003b, 6).
Capability
There is growing evidence that factors other than infrastructure are important when it comes to access issues (ITU 2003b). For instance, a study controlling for income and infrastructure (the usually acknowledged factors affecting ICT development) found that "there are significant differences in Internet usage rates for sixteen European countries associated with religious affiliations" and with the openness of a society (as measured by the breadth and quality of available civil liberties) (Beilock & Dimitrova 2003, 237), providing evidence that "non-economic factors can be critical for the adoption of new technologies" (Beilock & Dimitrova 2003, 239). Intangible capital has been identified as a key factor of the New Economy. Van Ark (2002) distinguishes between human capital (primary, secondary, and tertiary formal education; general vs. vocational education) and knowledge capital (R&D, patents, licenses, books, libraries, media, organizational capital, marketing, social capital). "Both access and usage of Internet, just as other ICTs, are inextricably linked with individuals' possession of skills and competencies" (SIBIS 2003, 40). The European Union has established the Digital Literacy Index (COQS-Index, SIBIS 2003, 103) and the European Computer Driving Licence (ECDL) to measure capability. UNCTAD (2003a) reports high correlations of 0.7865 (2001), 0.764 (2000), 0.776 (1999), 0.833 (1998) and 0.686 (1995) between their "Connectivity" factor and their "Access" factor (comprising users, literacy6, call costs and average income), as shown in Table 2 below.
6
Their "Access" factor comprises variables that we consider a better fit for what we called "Capability", while others (such as costs and income) would belong under "Connectivity" rather than "Capability". This choice can only be tested empirically; we report the relevant findings later in this paper.
Table 2: Correlations of UNCTAD’s Components of their ICT Development Index. Source: UNCTAD (2003a), 58.
As another argument for the importance of skills ("Capability" in our terminology) in accounting for the variance in ICT development inequalities between countries, Sciadas (2003) reports that "ICT skills moderate somewhat the severe gaps caused by networks" (Sciadas 2003, 36). From our perspective, capability is an important complement to connectivity, representing not only the skill sets people need in order to make the most of being connected, but also the capabilities of the information system itself and whether its architecture and design support, rather than discourage, intelligent engagement by the end user (issues of usability are important in this respect). Furthermore, literacy must be distinguished from fluency, which, beyond the rather minimal requirements needed for someone to be considered "ICT literate", involves "a threshold of skill acquisition from which a user may intuitively and independently adopt new skills and adapt existing ones with relative ease across new technologies" (GSEIS 2003, 5). It is also important to keep in mind the relevance of including user skills for content creation (and not only content consumption) as an additional factor under capability. In an earlier proposal, the "Information Literacy Indicator" and the "Teacher Digital Media Pedagogy Indicator" were proposed as aggregate measures at this level. Work is currently underway to empirically test the proposed indexes.
Content
Content is a powerful yet often overlooked dimension. The issues that need to be included at this level relate to questions such as: is content accessible, culturally sensitive, community relevant, and language appropriate, and how do these
characteristics affect motivation to use technology once connectivity has been achieved? The Children's Partnership (2000) was among the first studies to propose conceptualizing ICT inequalities in terms of content. Its findings, from a series of studies, were that the main content-related barriers to ICT development at the bottom of the diffusion rankings were lack of local information content; literacy barriers (online content is designed primarily for Internet users with discretionary money to spend, so that the vast majority of information on the Net is written for an audience reading at an average or advanced literacy level); language barriers (an estimated 87 percent of documents on the Internet were in English in 2000, yet at least 32 million Americans speak a language other than English as their primary language); and lack of cultural diversity. The "People's Content Index" was proposed in an earlier attempt (GSEIS 2003, 13) as a relevant aggregate-level measure of content.
Context
In our understanding, Context variables encompass what the World Economic Forum's NRI measure compiled as "Enabling Factors": contextual, external, socio-political variables that affect the e-readiness relationships between the core variables. "The fact that a dollar spent on ICT may yield widely varying results in terms of e-readiness underlines the importance of other variables such as market and regulatory factors" (UNCTAD 2003c, 16). From another perspective, having the equipment or networks (Connectivity) is not enough to derive economic benefits. "Other factors, such as the regulatory environment" (Context in our understanding), "the availability of appropriate skills" (Capability), and "the ability to change organizational set-ups" (OECD 2003, 10) should all be taken into account when developing an e-metric.
Therefore, in our understanding, the "Context" sub-index (factor) comprises external factors that moderate the effects of, and the relationships between, the other factors (C's). Put differently, "Context" is the sub-index that directly addresses the long-standing question: is the ICT divide a special case of the larger socio-economic divide, or is it structurally different? In other words, what is the causal relationship between socio-economic development level and ICT development? UNCTAD's (2003a) version of what we have named "Context" is their "Policy" factor. It is reported there that "the scores of the Policy and Connectivity Indices show a reasonable correlation […] of 0.516 (2001), 0.4297 (2000), 0.430 (1999), 0.426 (1998) and 0.403 (1995)", as shown above in Table 2. However, there are no studies linking context (enabling factors) to capability as a dependent variable, although, as reported in numerous studies, ICT investment is associated with high ICT skills (OECD 2003, 69, where a listing of current studies appears). Pointing yet again to a methodological issue discussed above, aggregation may disguise some of the impacts of ICT that are more evident at a disaggregated (sector-by-sector) level. This may be because the impacts of ICT depend on other factors and policy changes, which can differ across industries and sectors. For instance, "regulatory reform in specific sectors and specific countries, financial services for example, may already have allowed ICT to strengthen performance, while lack of reform may still hold back productivity changes in other sectors or countries" (OECD 2003, 78).
The implication of the preceding discussion is clear: "examining any of these factors in isolation is of limited use" (OECD 2003, 80), so what we need is a fairly complex, multi-layered, theoretically driven model that includes the direction of causality and takes into account all the relevant variables at the same time. This type of model, we
propose, can only be made methodologically sound by using advanced statistical methods such as Structural Equation Modeling (SEM). Hence, we decided on SEM as our method for developing and testing the InfoMetrics 4C model.
Methods
Secondary data provided by the World Bank's World Development Indicators Database 2003 and the ITU World Telecommunications Indicators 2002 were matched and compiled into a dataset that was then analyzed. For ease of data matching and for availability reasons, all data were analyzed at the country level and above. The final dataset contained variables related to population demographics (i.e., population, rural and urban socio-economic indicators, literacy level, completion rates for primary and secondary education, etc.), economic indicators (GDP, unemployment, wages, etc.), media outlets (i.e., numbers of daily newspapers, radio stations, etc.), and ICT-related variables (either economic variables such as tariffs, ICT revenues and investments, or use variables such as numbers of cell phone subscriptions, cable subscriptions, telephone main lines, etc.). A total of 237 cases (countries and regions) were included in the analysis. The complete list of included variables is shown in the appendix to this paper.
Data analysis
The compiled dataset included most of the connectivity and capability variables, but only a few of the content and context variables. However, even for the connectivity and capability sub-indices, the number of cases provided in the dataset was too low compared to the number of parameters to be estimated. To solve this problem, a two-step approach was taken: first, we conducted a confirmatory factor analysis for each sub-index where required; second, we formed simple sum composites for the sub-indices found, calculated the composite reliabilities, and introduced the composites into a simplified model, thus reducing the number of unknown parameters. The initial model for each C was similar to the one shown in Figure 10 below:
Figure 10: Path diagram of the initial InfoMetrics conceptual model for each C (observed variables load on two latent sub-indices, which in turn load on the overall index)
The working model was:
Figure 11: Path diagram of the InfoMetrics working model (used in the analysis)
The formula used for calculating the reliability of the composite was

$$\rho_{cc} = \frac{\left(\sum_{i=1}^{n} \lambda_{i1}\right)^{2} \varphi_{11}}{\left(\sum_{i=1}^{n} \lambda_{i1}\right)^{2} \varphi_{11} + \sum_{i=1}^{n} v(\delta_{i})}$$
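As a numerical sketch of this reliability computation (not code from the original analysis; all loadings and variances below are hypothetical placeholders), the composite reliability and the composite error-variance constraint used in the working model can be written as:

```python
# Composite reliability for a single-factor measurement model, following the
# formula above. The numbers at the bottom are illustrative placeholders,
# not the paper's actual estimates.

def composite_reliability(loadings, error_variances, phi11=1.0):
    """rho_cc = (sum lambda_i1)^2 * phi_11 /
                ((sum lambda_i1)^2 * phi_11 + sum v(delta_i))"""
    num = sum(loadings) ** 2 * phi11
    return num / (num + sum(error_variances))

def composite_error_variance(var_composite, rho_cc):
    """v(delta_c) = v(c) * (1 - rho_cc): the fixed error variance of the
    single-indicator composite in the working model."""
    return var_composite * (1.0 - rho_cc)

# Hypothetical loadings and error variances for a four-indicator sub-index:
lam = [0.9, 0.8, 0.85, 0.7]
errs = [0.19, 0.36, 0.28, 0.51]
rho = composite_reliability(lam, errs)
print(round(rho, 3))  # reliability of the sum composite, here 0.887
print(round(composite_error_variance(10.0, rho), 3))  # here 1.126
```

In the analyses that follow, reliabilities of this kind yield the fixed composite error variances reported for each sub-index.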
To make the working model (a single-indicator, single-latent-variable model) identified, we set the path from each latent variable (i.e., sub-index) to its composite to 1, and the variance of the composite error to v(δc) = v(c)(1 − ρcc). Although this method was helpful in testing the validity of each index separately (connectivity, capability, content, and context), as well as its predictive power regarding e-usage, it did not suffice for the overall time-series model: the number of observations (N = 237) was too low to allow a test of the overall model. More data are needed in order to do so. Thus, we only examined the individual indices and their effects on Internet usage. The data used were those collected for 2001, unless otherwise noted.
Results
To test the validity of the connectivity index, as well as its effect on Internet usage, we first conducted confirmatory factor analyses (CFA) on availability and spending, formed composites for the two, and used the composites in further analyses. The indicators for availability were the total numbers of cable, telephone, and cell phone subscriptions, the number of personal computers available at home, the total number of telephone main lines, and the number of telephone main lines per 1,000 inhabitants. The results show that all of the indicators loaded significantly on a single latent variable (availability), and that the sub-index model for availability has a good overall fit (Chi-Square = 6.91, df = 5, p = 0.22729, RMSEA = 0.037). Consequently, the composite was retained and used in further analyses (with the error variance for the composite calculated at 11681.6797). The indicators available in the data set for the "spending" sub-index were cell phone monthly subscription and connection fees, dial-up PSTN monthly subscription and charge, dial-up ISP monthly subscription and charge, and overall dial-up charge per hour of use.
All the indicators loaded significantly on a single factor, and the analysis showed a good model fit for the one-factor "spending" model (Chi-Square = 3.30, df = 3, p = 0.34739, RMSEA = 0.021). The "spending" composite was used in further analysis (with the error variance for the composite calculated at 14867.8877). No variables were available for the usage and quality-of-connection sub-indices. Even so, the CONNECTIVITY index could be reliably formed using only the availability and spending sub-indices, along with an additional variable, the percentage of telephone main line capacity used. The model showed an acceptable fit (Chi-Square = 5.019, df = 2, p = 0.081, RMSEA = 0.081, NFI = .99, CFI = .99).
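Fit statistics of this kind can be sanity-checked by recomputing RMSEA from the reported chi-square, degrees of freedom, and sample size via the standard formula RMSEA = sqrt(max(χ² − df, 0) / (df(N − 1))). The sketch below is a generic implementation (not code from the original analysis); with the paper's N = 237, it reproduces the value reported for the "spending" model:

```python
import math

def rmsea(chi_sq, df, n):
    """Root Mean Square Error of Approximation from a chi-square model test:
    sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi_sq - df, 0.0) / (df * (n - 1)))

# Reported one-factor "spending" model: Chi-Square = 3.30, df = 3, N = 237
print(round(rmsea(3.30, 3, 237), 3))  # ≈ 0.021, matching the reported RMSEA
```

The same check reproduces the RMSEA of .034 reported later for the capability model with usage (chi-square = 3.843, df = 3).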
[Path diagram: the composites "spendc" (error variance 14867.8876) and "availc" (error variance 11681.6797), together with "mlicap00", load on the latent variable "connectiv".]
Figure 12: Path diagram for the Connectivity Index
However, when adding Internet usage as a predicted variable to the model, the fit improved visibly (Chi-square = 5.206, df = 4, p = .267, NFI = .993, IFI = .998, CFI = .99, RMSEA = .036).
Figure 13: Path diagram for the Connectivity Index, after Internet usage ("netusi01") was added
But, as we predicted, connectivity is not all that important: even though the model had a better fit when predicted usage was included, the actual regression weight of connectivity on Internet usage was only .02 (standardized estimate), and it was not statistically significant. In terms of capability, only previous experience with computers had enough data available for forming a sub-index. The numbers of personal computers available for home use in 1995, 1996, 1997, 1998, and 2000 were used as a proxy for experience with computers. (Personal evaluations of experience and confidence in using computers and the Internet might, however, be better predictors of usage.) All the indicators were shown to load significantly on a single latent factor, and the CFA model for experience with computers also showed a good fit (Chi-Square = 0.69, df = 2, p = 0.70893, RMSEA = 0.000). The composite was used further (with the error variance for the composite calculated at 473377.8727). As other indicators were not available to form a CAPABILITY index, only financial capability (average wages in 2001 per country) and completion rates for secondary education were added to previous experience with computers to form an overall index. The model showed a very good overall fit (chi-square = .858, df = 1, p = .358, NFI = .995, CFI = 1.000, RMSEA = 0.000).
[Path diagram: the experience-with-computers composite "experpc" (error variance 473377.8727), "wage01", and "scnded00" load on the latent variable "capabil".]
Figure 14: Path diagram of the Capability Index
When Internet usage was also added to the model as a predicted variable, the good overall fit of the model was maintained (chi-square = 3.843, df = 3, p = .279, NFI = .987, CFI = .997, RMSEA = .034).
Figure 15: Path diagram for the Capability Index, after Internet usage ("netusi01") was added
Moreover, the standardized regression weight of capability on Internet usage was .748. Some indicators of content were also available, such as the overall number of secure servers reported in 2001, the total number of Internet hosts and the number of hosts per 100 inhabitants in 2001, and the numbers of Internet hosts, overall and per 100 inhabitants, added in 2001 compared to 2000. Although these indicators reliably form a single overall index (chi-square = .357, df = 3, p = .949, NFI = 1.000, CFI = 1.000, RMSEA = 0.000), they are not sufficient for predicting Internet usage. Additional variables, such as the number of patents awarded annually and the number of copyrighted online works created annually, are needed in order to form a strong predictor of Internet usage. We intend to expand this model in future research and collect more data in order to test the overall model.
[Path diagram: "secsrv01", "hostt01", "hosinh01", "adhost01", and "adhosp01" load on the latent variable "content".]
Figure 16: Path diagram for the Content Index
In terms of context, several sub-indices could be reliably compiled: media outlets (numbers of daily newspapers, radios, and TVs; chi-square = .771, df = 2, p = .856, NFI = .998, IFI = 1.000, RMSEA = 0.000), ICT (teledensity, telecom revenues and investments; chi-square = 0.180, df = 1, p = 0.671, NFI = 1.000, CFI = 1.000, RMSEA = 0.000), and demographics (total population in 2001, population density, and rural vs. urban population as a percentage of the total; chi-square = 1.718, df = 5, p = .887, NFI = .999, CFI = 1.000, RMSEA = 0.000). However, not enough data were available to form an overall CONTEXT index. Literacy level, GDP variables, and additional demographics are needed to test an overall model for CONTEXT. Although some of the needed variables were actually in the World Bank and ITU databases, too few data points were present in the data set, so the respective covariance matrices could not be computed.
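The missing-data problem behind the incomputable covariance matrices can be made concrete with a small sketch: when listwise deletion leaves fewer complete cases than variables, the sample covariance matrix is rank-deficient and SEM estimation cannot proceed. The records below are entirely hypothetical, not drawn from the World Bank or ITU data:

```python
# Hypothetical country records with missing values (None). Listwise deletion
# keeps only rows with no missing entries.
rows = [
    {"literacy": 0.99, "gdp_pc": 25000, "urban_pct": 0.78},
    {"literacy": None, "gdp_pc": 1200,  "urban_pct": 0.35},
    {"literacy": 0.67, "gdp_pc": None,  "urban_pct": None},
    {"literacy": None, "gdp_pc": None,  "urban_pct": 0.51},
]

complete = [r for r in rows if all(v is not None for v in r.values())]
n_vars = len(rows[0])
print(len(complete), n_vars)  # 1 complete case, 3 variables

# A sample covariance matrix needs more complete cases than variables to be
# positive definite; with fewer, SEM software cannot estimate the model.
can_estimate = len(complete) > n_vars
print(can_estimate)  # False
```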
Discussion
Based on our results, the idea that connectivity or infrastructure is not the most important predictor of ICT development, and that other factors count (GSEIS 2003), has now been tested empirically: capability appears to be a better predictor of actual usage than availability, a finding that suggests moving beyond a strictly infrastructure-based understanding of universal service and universal access. One needs not only the available infrastructure but also the personal skills to use ICT. Of course, this may differ for poor regions where availability is still a problem, but overall connectivity does not seem to be the best predictor of usage. The findings come with a limitation: the data we used were gathered mostly from developed and some developing countries, which might skew the distributions; the lack of data for the less and least developed countries is a well-known source of systematic variation, and hence skewness, which could explain the low coefficient reported above. In the individual models, then, the capability index seems to be the better predictor of Internet usage, explaining roughly 55% of the variance in e-usage. In interpreting these results we must keep in mind, though, that in an overall model including both capability and connectivity, the regression weights can change significantly. We must also keep in mind that the countries that do report their Internet usage and all the correlated variables are developed and middle-income developing countries, which have already more or less solved the access/availability problem. In this case, capability will naturally be a better predictor of actual e-usage than connectivity.
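The roughly 55% figure follows directly from the standardized weight reported earlier: with a single standardized predictor, the variance explained is simply the square of the standardized regression weight.

```python
# R^2 with a single standardized predictor is the square of the standardized
# regression weight reported for capability on Internet usage (.748).
beta_capability = 0.748
r_squared = beta_capability ** 2
print(round(r_squared, 2))  # 0.56, i.e., roughly 55-56% of variance in e-usage
```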
The findings of the analyses reported in this paper represent a first step towards a new, more theoretically informed and reliable model for measuring ICT development/e-readiness. Without a comprehensive framework that includes both a set of core indicators and a set of domain-specific ones (applicable to specific areas such as e-learning, e-commerce, etc.), no explanatory power can be offered to business and policymakers alike. This is no trivial point: more rigorous modeling supports better decisions, for example by showing that variation in one factor of a country's ICT development standing at a particular point in time leads to variation (an increase or decrease) in another, and thus allows the correct identification of opportunity. In this respect, if we know, for instance, that
increased upskilling of the workforce's e-literacy leads to increased spending in a particular area, this could signal the development of a market opportunity for that country that would not otherwise be noticed by looking only at descriptive measures, which cannot reveal the direction of relationships. From yet another perspective, if we know what causes a country's poor standing in the ICT development race, we know where international organizations should direct funding, what effect funding one factor will have upon another, and what the overall effects will be on closing the Digital Divide for an area. Consistent with several current conceptualizations, the InfoMetrics 4C model can point the direction in which research should go. The findings reported here support the factor independence of the 4 C's. However, the reader should take into account several limitations: first, several difficulties were encountered in the data sources used, such as the nonexistence of data for several variables, which prevented us at this point from testing the full model7; second, the analyses performed did not include the relationships between the different C's themselves in a full latent variable model, due to the difficulties noted above. This further step is necessary to test whether a single index is to be sought, and to examine the relationships among all the C factors in the model.
Further research
"The world is still a long way from agreeing upon a common set of information society access indicators with extensive and detailed coverage. In cases where data do exist, they are sometimes unreliable, incomplete, out of date or not internationally comparable" (ITU 2003b, 23). Our attempt to provide a conceptual model of Information Society metrics that can be empirically tested with more reliable data analysis techniques is only a first step.
In future research, we plan to expand the data analyzed here and test the remaining layers of the model that could not be estimated due to missing values for some of the included variables. The InfoMetrics research program developed by the authors of the present paper will then perform an SEM test of the overall model, in order to determine whether aggregating the 4 C's into a single index is a better option than leaving the model at the four C's we propose. In the end, much remains to be done to improve the quality of measurement and the rigor of theoretical models in the field of macro-level Information Society indicators. As explained in the literature review, such work is useful from a twofold perspective: as a way of improving the quality of our explanations of why certain countries lag behind others in ICT development, and as a practical (policy and business) tool for decision support when investing in prospective regions, countries, and markets.
References
[1] Applied Statistics Group (2002), State-of-the-art report on current methodologies and practices for composite indicator development, Ispra, Italy: Joint Research Centre, European Commission.
[2] Arquette, T. (2001), Assessing the Digital Divide: empirical analysis of a meta-analytic framework for assessing the current state of Information and Communication System development, presented to the International Association of Mass
7
The InfoMetrics project is currently testing different data sources in order to provide an empirical test of the full model.
Communication Research / International Communication Association Symposium on the Digital Divide, November 16-17, Austin, TX.
[3] Barbet, P. & Coutinet, N. (2001), Measuring the Digital Economy: US and European perspectives, Communications & Strategies, No. 42, 2nd quarter.
[4] Beilock, R. & Dimitrova, D. (2003), An exploratory model of inter-country Internet diffusion, Telecommunications Policy 27, 237-252.
[5] Bell, D. (1976), The Coming of Post-Industrial Society: a venture in social forecasting, New York, NY: Basic Books.
[6] Beroggi, G., Taube, V., Levy, M. & Swiss Federal Statistical Office (2003), Statistical Indicators, Deliverable 6.1, Cambridge, UK: SEAMATE, retrieved January 2004 from http://www.seamate.net/dwl/seamate_d6_1.pdf
[7] Bridges.org (2001a), Spanning the Digital Divide: understanding and tackling the issues, Washington, DC: bridges.org, retrieved January 2004 from http://www.bridges.org
[8] Bridges.org (2001b), Comparison of E-Readiness Assessment Models, retrieved January 2004 from http://www.bridges.org/ereadiness/report.html
[9] Bruno, L. (2001), Information Society Index 2001 trends and rankings, Bulletin #W23953, abstract available online at http://www.itresearch.com/ [accessed January 2001].
[10] Castells, M. (1996), The rise of the network society, Cambridge, MA: Blackwell.
[11] The Children's Partnership (2000), Online content for low-income and underserved Americans: the Digital Divide's new frontier, Santa Monica, CA: The Children's Partnership.
[12] Choucri, N., Maugis, V., Madnick, S. & Siegel, M. (2003), Global e-readiness: for what? Cambridge, MA: Center for e-business at MIT, Massachusetts Institute of Technology.
[13] Daly, J. (2001), Measuring Impacts of the Internet in the Developing World, iMP Magazine, May, available online at http://www.cisp.org/imp/may_99/daly/05_99daly.htm [accessed April 2001].
[14] Daly, J.
(2000), Studying the impacts of the Internet without assuming technological determinism, ASLIB Proceedings, 52 (8), available online at http://www.aslib.co.uk/proceedings/2000/sep/03.html [accessed January 2004].
[15] Dutta, S., Lanvin, B. & Paua, F. (Eds.) (2004), The Global Information Technology Report 2003-2004: readiness for the networked world, New York, NY: Oxford University Press, World Economic Forum & INSEAD.
[16] Dutta, S., Lanvin, B. & Paua, F. (Eds.) (2003), The Global Information Technology Report 2002-2003: readiness for the networked world, New York, NY: Oxford University Press, World Economic Forum & INSEAD.
[17] Economist Intelligence Unit & IBM (2003), The 2003 E-Readiness rankings, New York, NY: The Economist Intelligence Unit, available online at http://www.ebusinessforum.com/index.asp?layout=rich_story&doc_id=6427&categoryid=&channelid=&search=e%2Dreadiness [accessed January 2004].
[18] Foulger, D. (2001a), Seven bridges over the Global Digital Divide, presented to the International Association of Mass Communication Research / International Communication Association Symposium on the Digital Divide, November 16-17, Austin, TX.
[19] Foulger, D. (2001b), The cliff and the continuum: defining the Digital Divide, presented to the International Association of Mass Communication Research / International Communication Association Symposium on the Digital Divide, November 16-17, Austin, TX.
[20] Grigorovici, D., Schement, J.R. & Taylor, R. (in press), Weighing the intangible: towards a theory-based framework for Information Society Indices, Chapter 10 in E. Bohlin, S. Levin, N. Sung & C.-H. Yoon (Eds.), Global Economy and Digital Society, Amsterdam: Elsevier Science.
[21] GSEIS (2003), Re-evaluating the Bridge! An expanded framework for crossing the Digital Divide through Connectivity, Capability, and Content, a report on "The Digital Divide's Multiple Dimensions: Indicators for Measuring Success", roundtable of the Pacific Bell/UCLA Initiative for 21st Literacies, UCLA Graduate School of Education & Information Studies, August 1-2 and 4-5, 2002, Los Angeles, CA: UCLA Graduate School of Education and Information Studies.
[22] Human Development Report (2001), Making new technologies work for human development, London: Oxford University Press.
[23] Infodev (2001), ICT Infrastructure and E-Readiness Assessment Initiative, Washington, DC: World Bank, available online at http://www.infodev.org/ereadiness/ [accessed April 2001].
[24] ISIS (2003), Assessment of IST trends, impacts on growth and outline of scenarios, Deliverable 1, Cambridge, UK: SEAMATE, retrieved January 2004 from http://www.seamate.net/dwl/seamate_d1.pdf
[25] ITU (2003a), ITU Digital Access Index: World's first global ICT ranking; education and affordability key to boosting new technology adoption, press release, Geneva: ITU, available online at http://www.itu.int/newsroom/press_releases/2003/30.html
[26] ITU (2003b), World Telecommunications Development Report 2003: Access indicators for the Information Society, Executive Summary, Geneva: International Telecommunications Union.
[27] ITU (2003c),
World telecommunications indicators: Chronological time series 1960-2002 [STARS software package], Geneva: International Telecommunication Union.
[28] ITU (2002), World Telecommunications Development Report, Geneva: ITU.
[29] Jeskanen-Sundström, H. (2001), ICT Statistics at the New Millennium: Developing Official Statistics, Measuring the Diffusion of ICT and its Impacts, keynote address at the IAOS Satellite Meeting on Statistics for the Information Society, Aug. 30-31, Tokyo, Japan.
[30] Kuipers, A. (2002), Building blocks for the description of the digital economy, presented at the International Association for Official Statistics conference on "Official Statistics and the New Economy", August 27-29, London, UK, available online at http://www.statistics.gov.uk/iaoslondon2002/contributed_papers/IP_Kuipers.asp [accessed December 2003].
[31] Machlup, F. (1962), The production and distribution of knowledge in the United States, Princeton, NJ: Princeton University Press.
[32] Martin, S. (2003), Is the Digital Divide really closing? A critique of inequality measurement in "A Nation Online", IT & Society, 1(4), 1-13.
[33] Miles, I., Brady, T., Davies, A., Haddon, L., Jagger, N., Matthews, M., Rush, H. & Wyatt, S. (1990), Mapping and measuring the information economy: a report produced for the Economic and Social Research Council's program in information and
communication technologies, Library and Information Research Report No. 77, London: British Library.
[34] Newhagen, J. (2001), Routes to media access: apprehending Internet content, presented to the International Association of Mass Communication Research / International Communication Association Symposium on the Digital Divide, November 16-17, Austin, TX.
[35] OECD (2003), ICT and economic growth: evidence from OECD countries, industries and firms, Paris: OECD.
[36] OECD (2002), Measuring the Information Economy, Paris: OECD.
[37] OECD (2001), Understanding the Digital Divide, Paris: OECD.
[38] Rao, M. (2003), The nature of the Information Society: a developing world perspective, Visions of the Information Society series, Geneva: ITU.
[39] Schaaper, M. (2003), A proposal for a core list of indicators for ICT measurement, presented at the United Nations Conference on Trade and Development Expert Meeting on Measuring E-Commerce as an instrument for the development of the Digital Economy, Geneva, September 8-10: UNCTAD, retrieved January 2004 from http://r0.unctad.org/ecommerce/event_docs/measuring_programme_en.htm
[40] Sciadas, G. (Ed.) (2003), Monitoring the Digital Divide … and Beyond, Montreal, Canada: Orbicom, available online at www.orbicom.uqam.ca [accessed January 2004].
[41] Sciadas, G. (2002a), Monitoring the Digital Divide, Montreal, Canada: Orbicom.
[42] Sciadas, G. (2002b), Unveiling the Digital Divide, Connectedness Series, No. 7, Ottawa: Statistics Canada.
[43] Selhofer, H. & Husing, T. (2002), The Digital Divide Index: a measure of social inequalities in the adoption of ICT, presented at the 10th European Conference on Information Systems (ECIS 2002), Information Systems and the Future of the Digital Economy, Gdansk, Poland, June 6-8, retrieved January 2004 from http://www.empirica.biz/sibis/publications/articles.htm
[44] Selhofer, H. & Mayringer, H.
(2001), Benchmarking the Information Society Development in European Countries, Communications & Strategies, No. 43, 3rd quarter.
[45] Shifflet, M. & Schement, J. R. (1996), Information indicators: A review of their value in policy studies of the national information infrastructure, presented at the Annual Conference of the International Communication Association, Chicago, IL, May.
[46] SIBIS Consortium (2003), New eEurope Indicator Handbook, retrieved January 2004 from http://www.empirica.biz/sibis/handbook/handbook.htm
[47] UNCTAD (2003a), Information and Communication Technology Development Indices, New York & Geneva: United Nations.
[48] UNCTAD (2003b), Information Society measurements: the case of e-business, background paper by the UNCTAD secretariat, presented at the United Nations Conference on Trade and Development Expert Meeting on Measuring E-Commerce as an instrument for the development of the Digital Economy, Geneva, September 8-10: UNCTAD, retrieved January 2004 from http://r0.unctad.org/ecommerce/event_docs/measuring_programme_en.htm
[49] UNCTAD (2003c), E-Commerce and development report 2003, New York & Geneva: United Nations.
[50] US (2002), A Nation Online: how Americans are expanding their use of the Internet, Washington, DC: National Telecommunications and Information Administration, Dept. of Commerce.
[51] Van Bommel, M., Devilee, J., Kuipers, A., Ramprakash, D., Bosch, J. & Wolters, T. (2000), Available indicators for the New Economy: a stocktaking paper, Luxembourg: NESIS.
[52] Wilson, E., Daly, J. & Griffiths, J-M. (1998), Internet Counts: Measuring the Impacts of the Internet, Washington, DC: National Academy Press, available online at http://www.bsos.umd.edu/cidcm/wilson/xnasrep2.htm [accessed December 2003].
[53] World Bank (2004), World development indicators online [online database], Washington, DC: World Bank, available online at http://www.worldbank.org