Int. J. Technology Management, Vol. 53, No. 1, 2011

R&D networks: an evaluation framework

Alessandro Sala*
Department of Management, Economics and Industrial Engineering,
Politecnico di Milano, Piazza L. da Vinci, 32, Milano 20133, Italy
Fax: +39-02-6696-945
E-mail: [email protected]
*Corresponding author

Paolo Landoni and Roberto Verganti
Department of Management, Economics and Industrial Engineering,
Politecnico di Milano, Piazza L. da Vinci, 32, Milano 20133, Italy
Fax: +39-02-2399-2720
E-mail: [email protected]
E-mail: [email protected]

Abstract: R&D networks have been studied and promoted by scholars and policy makers as a way to increase the performance of innovation systems. At the same time, limited attention has been devoted to evaluating their performance. This paper reviews the literature that has studied R&D networks from different approaches and suggests a framework to help develop evaluation systems for R&D networks. The framework is based on a network model that highlights the main elements and characteristics of an R&D network and underlines the opportunity for three different perspectives of evaluation; it has been applied and tested on three case studies.

Keywords: research network; research evaluation; technology transfer; network model; indicators; network performance.

Reference to this paper should be made as follows: Sala, A., Landoni, P. and Verganti, R. (2011) ‘R&D networks: an evaluation framework’, Int. J. Technology Management, Vol. 53, No. 1, pp.19–43.

Biographical notes: Alessandro Sala is a Contract Professor at the Politecnico di Milano, where he teaches the course General Management. He obtained his PhD in Management, Economics and Industrial Engineering from the same university and has been a visiting scholar at the Science and Technology Policy Research (SPRU) centre of the University of Sussex. His main areas of interest are regional innovation systems, regional competitiveness and research evaluation from a policy perspective, and he acts as a consultant for the Lombardy Regional Institute for Research.

Paolo Landoni is an Assistant Professor at the Politecnico di Milano, where he teaches the course General Management. He received his PhD in Management, Economics and Industrial Engineering from the same university and has been a visiting scholar at the Department of Economics and Applied Economics of the Katholieke Universiteit Leuven (KULeuven). He concentrates his research in the areas of innovation management, especially considering the perspective of governments, governmental agencies and public organisations.

Roberto Verganti is a Professor of Management of Innovation at the Politecnico di Milano, where he also serves as the Director of MaDe in Lab, the Laboratory for Education in Management of Design and Innovation. He is also the Chairman of PROject Science, a consulting institute focusing on strategic innovation, a Visiting Professor of Design Management at the Copenhagen Business School and an Adjunct Professor of Design Innovation at the University of Vaasa, Finland. He is a member of the Editorial Board of the Journal of Product Innovation Management and of the Advisory Council of the Design Management Institute.

Copyright © 2011 Inderscience Enterprises Ltd.

1 Introduction

In recent decades, many R&D networks have been established, and networking activities have been studied and promoted by scholars and policy makers as a fundamental means of increasing the competitiveness of innovation systems (Georghiou and Roessner, 2000). The rise of interactions and of more structured and intense collaborations, such as networks, consortia and agreements between research groups of different universities, firms and other organisations (government institutions, hospitals, foundations, etc.), is strictly linked to changes currently under way in innovation systems. First, scientific research and technological development have become so complex and multidisciplinary that the skills of single research organisations or research groups are often not enough to reach the desired target (Larédo, 1998). Second, research centres and universities are facing a reduction in public resources and are more willing to perform technology transfer activities. Finally, firms are changing the way they develop new ideas and products, increasingly accessing the competencies and skills of external actors (Chesbrough, 2003; Laursen and Salter, 2004; Campbell and Guttel, 2005), thanks also to advances in information and communication technologies (ICTs). For these reasons, the number of R&D networks, the number of participants and the expectations in terms of results have all increased in recent years.

Recently, attention to the evaluation of R&D activities in general has also increased. This is due in particular to their acknowledged importance for socio-economic development and to the need to understand how useful and profitable public and private investments in these activities are. In the case of R&D activities carried out in networks, evaluation is significantly more difficult, since it has to take into account coordinated activities and multiple players’ objectives, tasks and resources.
Many scholars and practitioners, as highlighted in the following section, have focused on specific aspects of R&D networks, but as far as we are aware, models to evaluate them in a comprehensive way are lacking. This paper aims at introducing a framework which could help to develop evaluation systems for R&D networks and to present the results of these evaluations to sponsors, funders and participants. In order to achieve these goals, our research was directed at understanding how R&D networks work, which processes and stakeholders are involved, and which objectives and information these stakeholders have and need.

In terms of methodology, the study was based, in a first phase, on a literature review and an action research process (Lewin, 1947; Susman and Evered, 1978; Robinson, 1993) in order to identify the relevant variables and characteristics of R&D networks and to develop our framework. In a second phase, the framework was further developed and tested for generalisability on two additional case studies. In the first phase, we benefited from involvement in the start-up phase of an R&D network in the biomedical sector [Health Innovation Network Technology (HINT), the first case described in Section 4]. We were involved for two years in the study of the coordination mechanisms, the governance and, in particular, the design of the evaluation system of this network. The action research methodology, coupled with an extensive analysis of the literature, allowed us to obtain an in-depth knowledge of the characteristics, needs and problems of this network and to cope with the complexity of evaluation systems (Gummesson, 2000). In the second phase, the evaluation framework was tested on two more networks and further refined in terms of variables, relationships and indicators. In order to increase the generalisability of the framework, we chose R&D networks having different features and carrying out a broad range of activities related to scientific research and technology development (from basic research to education and mutual learning). The differences among the cases showed that evaluation is significantly context-specific and should be related to R&D network characteristics, as will be discussed in the next sections. The methodology followed, and in particular the use of case studies, is coherent with the theory-building nature of our work (Yin, 1994).
In Section 2, the most relevant contributions about networking among R&D organisations and about evaluation applied to R&D networks are presented. Section 3 introduces a framework that helps define evaluation systems tailored to the characteristics of the R&D network to be analysed. In Section 4, the application of the framework to three case studies is presented. Finally, concluding remarks are provided in Section 5.

2 State-of-the-art

2.1 R&D networks

Among the variety of network definitions proposed (for instance, Ojasalo, 2004; Achrol and Kotler, 1999; Hastings, 1995), in this work we refer to the one suggested by Tijssen (1998), which is, in our opinion, the most comprehensive and can be applied to networks with quite different purposes. According to Tijssen (1998), a network is:

“An evolving mutual dependency system based on resource relationships in which their systemic character is the outcome of interactions, processes, procedures and institutionalization. Activities within such a network involve the creation, combination, exchange, transformation, absorption and exploitation of resources (tangibles and intangibles) within a wide range of formal and informal relationships.”


Within this definition, it is possible to highlight several network types that can be grouped into five major classes (Fischer, 1998):

•	supplier networks, devoted to sharing design rules and arrangements

•	customer networks, which aim at accessing marketing channels and distributors

•	producer networks, whose purpose is to enlarge production capacity in order to cover a wider product portfolio

•	R&D cooperation networks, to rapidly gain access to new scientific competencies

•	technology cooperation networks, dedicated to acquiring some degree of knowledge of a new technology.

It is useful to divide these classes into networks that lie directly in what may be called the firm’s vertical chain of production and sale, including suppliers, buyers and the firm’s own manufacturing operations, and those that do not (competitors, consultants/contract R&D firms and joint or cooperative ventures). The first three classes belong to vertical networks, whilst horizontal networks comprise the last two. Current network literature has mainly focused on relationships with suppliers, customers and distributors (vertical networks); less attention has been devoted to the firm’s relationships with its partners/competitors (horizontal networks) (Chetty and Wilson, 2003).

This paper focuses on R&D cooperation networks, which can include university departments, national research organisations, technology transfer offices (TTOs), firms’ research laboratories and firms. In the last decades, many of these R&D networks have been established (Wagner and Leydesdorff, 2005) due to the growing complexity and multidisciplinarity of research and technological development, which imply more competition on international technology markets, an accelerated transition to knowledge markets, and the need to share increasing research risks and costs (Larédo, 1998). Interaction among different scientific fields has become more intense, and traditional organisations based on vertical disciplines face increasing difficulties in contributing to the evolution of science and technology on their own (see, for example, the rise of nanotechnology, biotechnology, bioinformatics and so on). These changes signal ‘the decline of technical self-sufficiency’ (Fusfeld, 1995) and force firms and single research organisations to access externally-generated knowledge (Chesbrough, 2003; Laursen and Salter, 2004; Edquist, 2005).
Some empirical research has been undertaken on ‘distributed innovation processes’ (Coombs et al., 2003), ‘networks of innovators’ (Powell and Grodal, 2005) and ‘open innovation’ (Chesbrough, 2003). Scholars and policy makers have promoted R&D networks as a fundamental means of increasing the competitiveness of innovation systems (Georghiou and Roessner, 2000; Malerba, 2004). At the same time, several novel forms of collaboration between industry and public research organisations (PROs) have emerged, most notably multiparty collaborations and strategic research alliances (Webster, 1994). Together, these industry-PRO linkage activities have been the subject of intense policy interest, as governments – at local, national and supranational levels – have focused on the ‘commercialisation’ of public research as an important means of encouraging innovation and growth in technology-based industries (Faulkner and Senker, 1995).


Scholars have identified several kinds of skills/competencies/resources exchange that differ in terms of structure of control (Cagliano et al., 2000; Robertson and Langlois, 1995), typology and number of partners involved (Gomes-Casseres, 1997), density of relationships (Cagliano et al., 2000), time horizon (Snow et al., 1992) and geographic proximity (Breschi and Lissoni, 2001). Among the variety of ways to acquire new knowledge, networking occupies an intermediate position (Table 1): it involves more than two partners and allows a very flexible structure of control, which can be a key factor when operating in highly turbulent environments, both for long-term-oriented collaborations and for short-term ones (Hastings, 1995; Thorelli, 1986). On the one hand, through networking, an organisation can access complementary competencies in a more reversible way than through consortia, informal/formal agreements, joint ventures and equity operations. On the other hand, it allows access even to tacit knowledge (Nonaka and Takeuchi, 1995), which is embodied in individuals (Almeida et al., 2002) and flows from one organisation to another more easily than through other forms of collaboration such as licensing, contracting or outsourcing.

Table 1

Technological collaborations

Forms of collaboration, ordered from hierarchic to market logic:

	Minority equity
	Joint venture
	Consortium
	Networking
	Agreement
	Licensing
	Contracting
	Outsourcing

Each form is characterised along three dimensions:

	Agreement formalisation: equity vs. non-equity
	Structure of control: on activities, on milestones or on results
	Time horizon: long-term vs. short-term

Source: Adapted from Cagliano et al. (2000)

It is important to note that R&D networks can be further divided into two categories. The first is represented by ‘defined’ networks, where the number of partners and their identity are known and established by formal or informal agreements (e.g., associations, temporary consortia with long-term projects, etc.); the second is represented by ‘not-defined’ networks, whose boundaries are blurred and where the number of participants is not known in advance or changes rapidly over time (e.g., industrial or technological clusters – see, for instance, Carayannis and Campbell, 2006). In our study, we focus on ‘defined’ R&D networks, that is, on a clearly defined form of R&D collaboration where the number of partners is fixed for the period considered during the evaluation process (even if it can change over time).

2.2 R&D evaluation

In recent years, many authors have highlighted how science and technology have faced deep changes and how global competition and the pace of technological improvement have increased (Lawrence and Lorsch, 1965; Smith and Reinertsen, 1995; Gerritsma and Omta, 1999; Lint and Pennings, 1999). Many authors underline that competition to gather public funding is higher and that research institutes are asked for much more transparency on its usage (Fontana et al., 2003; Georghiou, 2001). The costs and risks of innovation activities have risen, and the need to plan and assess the value of these activities is acknowledged (Nobelius, 1999).

In the last decades, scholars have devoted significant efforts to deepening our knowledge about the evaluation of R&D activities and its implications for R&D management and policy making. Many works focus on the measurement of inputs and outputs as proxies of R&D efforts; less attention has been devoted to assessing the performance of R&D activities. Inputs are traditionally measured through the number of researchers or payroll personnel (Coccia, 2001) or the amount of financed research projects (Verganti et al., 2004), while outputs are mainly related to publications and related indicators (Persson et al., 2004; Van Leeuwen et al., 2003; Brusoni and Geuna, 2003; Moed, 2000) and patents (Baldini et al., 2006). Only a few authors pay attention to technology transfer activities (Coccia, 2001; Verganti et al., 2004; Karlsson et al., 2004) and to training and teaching activities (Coccia, 2001). The debate is still open about the typology of indicators (e.g., quantitative vs. qualitative) (Moser, 1985; Brown and Svenson, 1998) and the number of indicators to be collected (a wide scoreboard vs. a single-item measure) (Fu, 2005; Karlsson et al., 2004; Hagedoorn and Cloodt, 2003). When it comes to R&D networks, evaluation problems arise since multiple players, objectives and resources need to be taken into account.
Within this field, the majority of works focus on the reasons or circumstances that lead organisations to collaborate (e.g., to obtain a critical mass, to look for new markets, to step up innovation processes, to access complementary assets, to take advantage of technology spillovers) and on the selection of partners. Evidence shows that a networking strategy in scientific research and technology development is linked to shared resources (Miotti and Sachwald, 2003; Chetty and Wilson, 2003; Singh, 2004), to an organisation’s ability to learn (Steensma, 1996; Debackere et al., 1996; Eisenhardt and Martin, 2000) or to the nature of the technology (Boisot, 1986; Roberts and Berry, 1985). Other studies, based on social network analysis and organisation network theories, deal with the structure of R&D networks – for instance, their density and centralisation (Ejermo and Karlsson, 2006; Cantner and Graf, 2006) – and with the coordination of their activities. A main issue is the structure of control and the formalisation of shared assets (be they tangible or intangible) previously belonging to different organisations (Miotti and Sachwald, 2003; Doz and Hamel, 1998; Van Aken et al., 1998). Only a few works focus on the analysis of the impact of networking on R&D performance (Luukkonen, 1998) and on network efficiency (Poh et al., 2001). Some of them show that collaborative research produces higher-quality research (Beaver, 2004); others, using social network indicators such as density and network heterogeneity, found that well-connected R&D networks achieve a greater level of performance and lower variability of quality than less well-connected ones (Reagans and Zuckerman, 1999; Rigby and Edler, 2005).
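The density measure used by these studies has a simple closed form: for an undirected collaboration network with N partners, density is the number of observed ties divided by the N(N−1)/2 possible ties. A minimal illustrative sketch (the partner names are hypothetical, not from the paper):

```python
def network_density(partners, ties):
    """Density of an undirected network: observed ties / possible ties.

    partners: list of node identifiers
    ties: set of frozensets, each linking two distinct partners
    """
    n = len(partners)
    if n < 2:
        return 0.0
    possible = n * (n - 1) / 2
    return len(ties) / possible

# Hypothetical four-partner R&D network with three collaboration ties
partners = ["UnivA", "FirmB", "LabC", "HospD"]
ties = {frozenset(t) for t in [("UnivA", "FirmB"),
                               ("UnivA", "LabC"),
                               ("FirmB", "HospD")]}
print(network_density(partners, ties))  # 3 of 6 possible ties -> 0.5
```

A fully connected network has density 1.0; the sparser the collaboration, the closer the value is to 0.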
Nevertheless, it must be pointed out that the above insights are based only on limited aspects of research activities, mainly measured by publications or bibliometric indicators (Narin et al., 1997), and some authors warn against adverse effects that could arise due to the different techniques developed (Poh et al., 2001) and the variety of scientific fields (Luukkonen, 2005). Finally, some scholars suggest the development of new approaches to measuring pay-offs that focus on the linkages between knowledge producers and users and on the characteristics of research networks (Luukkonen, 1998).

The evaluation of R&D activities and networks has been explored from different points of view in the literature. However, we could not find integrated frameworks which take into account all these contributions, nor methods to develop evaluation systems that could face the problem of the high complexity and wide heterogeneity of situations. Our work benefits from these previous contributions based on different theories and attempts to integrate them into a unique framework.

3 The evaluation framework

This section presents the evaluation framework designed on the basis of the literature review and the empirical investigation. The key aspect of the framework is an evaluation matrix (Section 3.3), which is composed of two overlapping parts: R&D network elements (Section 3.1) and perspectives of analysis (Section 3.2). The R&D network elements come from a network model which is a simplified but comprehensive representation of the R&D network and of its characteristics. The perspectives of analysis summarise the evaluation objectives and points of view of the three main categories of stakeholders. The overlaps of these two parts in the evaluation matrix highlight the areas of investigation (and indicators) that have to be taken into account when designing a specific evaluation system, that is, a specific scoreboard of indicators. The evaluation process completes the framework, as explained in Section 3.4.
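The matrix idea can be made concrete as a mapping from (network element, perspective) pairs to candidate indicators; a specific evaluation system is then a selection of cells. This is a minimal sketch of that structure; the perspective labels and the indicator names are illustrative assumptions, not taken from the paper:

```python
ELEMENTS = ["outputs", "inputs", "organisation", "activities", "environment"]
PERSPECTIVES = ["network", "single partner", "external stakeholders"]  # illustrative labels

# Hypothetical evaluation matrix: each cell holds candidate indicators
# for one element/perspective overlap.
matrix = {(e, p): [] for e in ELEMENTS for p in PERSPECTIVES}
matrix[("outputs", "network")] = ["joint publications", "joint patents"]
matrix[("inputs", "single partner")] = ["researchers committed", "co-financing share"]
matrix[("environment", "external stakeholders")] = ["regional R&D investment"]

def scoreboard(selected_cells):
    """Collect the indicators of the selected cells into one scoreboard."""
    return [ind for cell in selected_cells for ind in matrix[cell]]

print(scoreboard([("outputs", "network"), ("inputs", "single partner")]))
```

Designing an evaluation system then amounts to choosing which cells are relevant for the network at hand and populating them with measurable indicators.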

3.1 R&D network model and evaluation dimensions

In order to define specific evaluation systems, a model of the R&D network, that is, the object of the evaluation, has to be defined. The first step in defining an evaluation system and its scoreboard of indicators is, as a matter of fact, the identification of the elements of the R&D network, i.e., the characteristics allowing one R&D network to be distinguished from another. Combining contributions from resource-based theory, social network theory and evaluation theory, we propose a simple network model composed of five elements: outputs, inputs, organisation, activities and environment (Figure 1). The R&D network elements identified in the model constitute those aspects that should be monitored when approaching an R&D network evaluation. Among the above five elements, the first four consider aspects that are endogenous, i.e., they can be attributed to partners’ decisions and behaviours. They are based on the traditional distinction made in the literature between inputs, process (organisation, activities) and outputs. Aspects considered in the environment element affect R&D network behaviour and performance, but they are exogenous, so they serve as control variables in the evaluation process. The following paragraphs present a description of the five elements on the basis of the literature review and our empirical research; for each of them, we identify the aspects to be evaluated (see, for instance, Roper et al., 2004), which we labelled ‘evaluation dimensions’. Each evaluation dimension can be further detailed in terms of variables and, finally, in terms of specific, measurable indicators. We discuss the relevance of the evaluation dimensions and suggest examples of variables and indicators that can be used for each R&D network element, depending on the specific network considered (Table 2).

Figure 1	Network model
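The five-element model lends itself to a plain data-structure encoding; the following is a hypothetical sketch (field contents and example values are ours, not the paper's):

```python
from dataclasses import dataclass, field

@dataclass
class RDNetwork:
    """Five-element network model: four endogenous elements plus
    the exogenous environment, used as control variables."""
    outputs: dict = field(default_factory=dict)       # e.g. {"publications": 12}
    inputs: dict = field(default_factory=dict)        # e.g. {"researchers": 30}
    organisation: dict = field(default_factory=dict)  # e.g. {"structure": "star"}
    activities: dict = field(default_factory=dict)    # e.g. {"projects": 3}
    environment: dict = field(default_factory=dict)   # exogenous controls

# Hypothetical network with a star organisation structure and five partners
net = RDNetwork(organisation={"structure": "star", "partners": 5})
print(net.organisation["structure"])  # star
```

Keeping the environment element separate mirrors its role in the framework: it is observed and recorded, but treated as a control rather than a performance dimension.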

3.1.1 Outputs

This first element defines the typology and expected level of outputs to be pursued by joining an R&D network, so that it is possible to focus the activities, to shape resources and to assess achievements. The first goal of an R&D network is to perform research in general, which can be split into basic research, applied research and experimental development (OECD, 2002). Due to the different nature of these activities, different types of output are expected and different indicators can be used. For instance, bibliometric indicators can be used for basic research, patents for applied research, and new materials, products, devices or processes for experimental development. One should consider that scientific and technological fields add specificities to the overall model and to the obtainable results (Boisot, 1986; Roberts and Berry, 1985), and they allow specific outputs to be identified, such as new treatments, new clinical procedures, new software and so on. R&D networks can also perform other activities with a scientific and technological basis that can lead to different outputs, namely technology transfer activities (Mansfield et al., 1982; Autio and Laamanen, 1995; Amesse and Cohendet, 2001) or education and training activities (Verganti et al., 2004; Brown and Svenson, 1998; Oxman, 1992). For instance, technology transfer activities could result in income from collaborations with other organisations/enterprises, while the number of courses, the degree of participation, students’ evaluations, etc., can be considered outputs of training activities.

Outputs and their forecast results should be established in advance by the partners (Bartezzaghi et al., 1999), and they are strictly related to the time horizon (Snow et al., 1992).
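The activity-to-indicator correspondence sketched above can be written down as a simple lookup; the indicator lists below are illustrative examples drawn from the discussion, not an exhaustive or authoritative set:

```python
# Illustrative mapping from R&D network activity type to example
# output indicators, following the OECD (2002) split of research activities.
OUTPUT_INDICATORS = {
    "basic research": ["publications", "bibliometric indicators"],
    "applied research": ["patent applications"],
    "experimental development": ["new materials", "new products", "new processes"],
    "technology transfer": ["income from collaborations"],
    "education and training": ["courses held", "participation rate", "student evaluations"],
}

def expected_outputs(activities):
    """Indicators to monitor, given the activities a network performs."""
    return sorted({ind for act in activities for ind in OUTPUT_INDICATORS[act]})

print(expected_outputs(["basic research", "technology transfer"]))
```

A network's scoreboard would take the union of the indicator sets for the activities it actually carries out, which is what `expected_outputs` computes.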

3.1.2 Inputs

Inputs are the resources (tangible or intangible) that an R&D network uses to pursue its goals. They can be detailed as human resources, financial resources and tangible assets (Kingsley et al., 1996; Ernst and Kim, 2002). Networking can have an internal focus, if partners want to share existing complementary resources among them (i.e., competencies, laboratories, technologies, …), or an external one, if the aim is to gain the critical mass needed to obtain new resources from the environment (funding, recruitment, …) (Azzone, 2000). Variables considered to evaluate this element should pay attention both to the efficacy and efficiency of internal processes in the usage of resources and to the ability to gain new resources in addition to those shared among partners. With respect to the first point, evaluation systems can take into account the number of researchers involved in common activities, their level of co-working and interaction, their access to shared infrastructures and laboratories and, finally, the shared expenses and investments. With reference to the second aspect, indicators can concern new personnel hired or collaborating with network researchers, new technologies/equipment and new funding or income.

3.1.3 Organisation

The performance of R&D activities carried out by a network depends also on its institutional and organisational setting. The characteristics of the partners represent the first variable to take into account, in terms of number and heterogeneity (Gomes-Casseres, 1994). For instance, university departments could be less flexible due to academic restrictions (teaching, contractual restrictions, etc.) and could have a more general orientation, while private institutions tend to respond to specific needs. The size of the participants could affect relationships within and outside the network; for instance, large research organisations could find it difficult to respond to local SME needs (Blind and Grupp, 1999). Localisation and proximity are other factors that influence research activities, coordination and the quality of outputs (Porter, 1998; Breschi and Lissoni, 2001). One should also consider the degree of formalisation of agreements (Grandori and Soda, 1995; Van Aken and Weggeman, 2000; etc.) as well as the organisational structure (Dubini and Aldrich, 1991; Thorelli, 1986), which are key elements for dealing with shared resources and managing the flow of R&D funds (OECD, 2002). In particular, during our field work, we identified three main R&D network organisation structures. In the first, which can be named ‘net structure’, coordination and control are based on mutual adaptation; in the second, which can be named ‘star structure’, one partner has the role of coordinator and manages interaction among all the partners; in the third, which can be named ‘hybrid structure’, all the partners have the same importance, but interactions are managed and controlled by a central unit constituted ad hoc (e.g., steering committees). Identifying the proper organisation structure is important to effectively assign responsibilities to each partner, depending on the role and duties it has in the R&D network (Azzone, 2000).
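One way to make the net/star distinction operational (our sketch, not a method proposed by the paper) is to look at how coordination ties concentrate on partners: Freeman degree centralisation is 0 for a fully even, net-like structure and 1 for a perfect star around one coordinator.

```python
def degree_centralisation(nodes, ties):
    """Freeman degree centralisation of an undirected network:
    0.0 for an even (net-like) structure, 1.0 for a perfect star."""
    n = len(nodes)
    if n < 3:
        return 0.0
    degree = {v: 0 for v in nodes}
    for a, b in ties:
        degree[a] += 1
        degree[b] += 1
    d_max = max(degree.values())
    numerator = sum(d_max - d for d in degree.values())
    denominator = (n - 1) * (n - 2)  # maximum of the numerator, attained by a star
    return numerator / denominator

# Hypothetical star: one coordinator tied to four partners
star = [("Coord", p) for p in ["P1", "P2", "P3", "P4"]]
print(degree_centralisation(["Coord", "P1", "P2", "P3", "P4"], star))  # 1.0

# Hypothetical net: a ring where every partner has the same degree
ring = [("P1", "P2"), ("P2", "P3"), ("P3", "P4"), ("P4", "P5"), ("P5", "P1")]
print(degree_centralisation(["P1", "P2", "P3", "P4", "P5"], ring))  # 0.0
```

Intermediate values would be consistent with the hybrid structure, where ties pass through an ad hoc central unit but partners remain peers.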


One more aspect concerns the coordination of shared activities and resources and the motivation of partners, which is linked to the control and management systems, namely the accounting system, the ICT system and the incentives system (Azzone, 2000). From this point of view, the evaluation system must interact with all these pre-existing systems supporting the governance and management of the R&D network (Gonda and Kakizaki, 1995). Indicators related to this element monitor interaction among the partners and between the partners and external actors, for instance, considering the number of contacts (Fu, 2005) and meetings, the behaviour of the coordinator and the number of communications from and to it, and the performance of an intranet that allows documents, comments, requests, etc., to be shared. This area of investigation shows the partners their level of interaction and allows bottlenecks in communication or the low involvement of some partner to be identified: it can thus have strong motivational effects in terms of interaction and cohesion.

3.1.4 Activities

This element is mainly concerned with project management, i.e., the activities and linkages among partners. It considers the work breakdown structure (WBS) and the task responsibilities given to each partner, according to the scheduling defined to achieve targets. Furthermore, the number of projects undertaken by an R&D network must be taken into account, since project management differs significantly between a single-project case and a multiple-project one: complexity increases and difficulties in allocating resources to projects must be considered (De Maio et al., 1994). Indicators about activities are based on the analysis of cost and scheduling variances between work scheduled and work performed, in order to verify whether problems have occurred and to understand their causes. Furthermore, project outputs and their quality must be assessed in order to understand the coherence of intermediate outputs and work-in-progress activities with the overall goal of the R&D network.
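The cost and scheduling variances mentioned here can be made concrete with the standard earned-value definitions (our assumption; the paper does not fix the formulas): cost variance CV = EV − AC and schedule variance SV = EV − PV, where EV is the value of work performed, AC the actual cost and PV the value of work scheduled.

```python
def cost_variance(earned_value, actual_cost):
    """CV = EV - AC. Positive: under budget; negative: cost overrun."""
    return earned_value - actual_cost

def schedule_variance(earned_value, planned_value):
    """SV = EV - PV. Positive: ahead of schedule; negative: behind."""
    return earned_value - planned_value

# Hypothetical work package: 100 planned, 80 worth of work done, 90 spent
print(cost_variance(80, 90))       # -10: overspent relative to work performed
print(schedule_variance(80, 100))  # -20: behind the scheduled work
```

Tracking both variances per work package makes it possible to see whether a deviation is a budget problem, a scheduling problem or both, which is exactly the diagnostic role the paragraph above assigns to these indicators.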

3.1.5 Environment

This element completes the model by considering the external factors that surround and influence the R&D network. Stakeholders (government, financers and enterprises) that operate in the same scientific field could be interested in its outputs or in joining the network. The state of the art of the scientific field and technological trends affect network characteristics in terms of the achievable results: interdisciplinary fields need a larger number of partners than disciplinary fields, and some technologies could provide general improvements across several sectors, while others are more sector-specific (Bresnahan and Trajtenberg, 1995). Localisation and the socio-economic context impact on the identification of market needs and the generation of new ideas: existing clustering or agglomeration advantages could attract R&D investments (Asheim and Coenen, 2004; Saxenian, 1996). National rules and bureaucracy facilitate or impede R&D network set-up and evolution (e.g., scientists’ mobility, intellectual property right (IPR) rules, regulations and enforcement) (Porter, 1998; Beldebros et al., 2001; Altenburg, 2000). All these elements must be taken into account because they could impact both the objectives and the resources of the R&D network and they could provide ex-post explanations of effectiveness and efficacy.

Table 2	Network elements, evaluation dimensions, variables and indicators

Category: Endogenous

1 Outputs
	Evaluation dimensions: research; technology transfer; training or other specific activities
	Variables: forecasted results; scientific and technological disciplines; time horizon
	Indicators (examples): publications/bibliometric indicators; patent applications; new prototypes/artefacts; classes (number and participants)

2 Inputs
	Evaluation dimensions: human resources; financial resources; infrastructures
	Variables: internal focus; external focus
	Indicators (examples): co-working; access to shared infrastructures; co-financing; personnel exchange

3 Organisation
	Evaluation dimensions: governance structure; organisational structure; coordination mechanism
	Variables: partners; motivation and coordination; agreement formalisation; incentive system; accounting system; ICT system
	Indicators (examples): number of partners; size of partners; density; meetings; shared documents; common public events

4 Activities
	Evaluation dimensions: budget; quality; scheduling; resources allocation
	Variables: project(s)/WPs duration; responsibilities
	Indicators (examples): cost variance; time variance

Category: Exogenous

5 Environment
	Evaluation dimensions: stakeholders and technological trends; socio-economic context; national rules
	Variables: economic and industrial structure; political environment; financial environment (e.g., venture capitals); legislative framework (e.g., IPR legislation, autonomy of the research centres)
	Indicators (examples): GDP in the region; public R&D investments in the region; private R&D investments in the region; overall scientific and technological performance of the region; number of new and spin-off firms in the region


3.2 The evaluation perspectives of analysis

R&D results in benefits both for the organisations that carry out the activities (private returns) and, by means of spillover effects, for external actors and the socio-economic system (social returns) (Roper et al., 2004). The literature identifies several kinds of stakeholders, such as financial organisations, government institutions and the partners themselves, which are interested in better understanding the performances of R&D networks (Malecki, 1981a, 1981b; Lint and Pennings, 1999; Coccia, 2001). Policy makers are interested in understanding how government initiatives affect interaction among players operating in scientific research and technology development (Hayashi, 2003; Georghiou, 2001). Financial institutions, foundations and governments aim at identifying the socio-economic benefits generated by networking (Roper et al., 2004), thus providing justification for supporting R&D networks. Other research institutions could be willing to understand the opportunities in joining the R&D network. When considering the R&D network itself or a single partner, evaluation should capture how a single unit impacts on common activities and vice versa (Luukkonen, 1998), that is, what the marginal effects on participants' performances are. These marginal effects can be grouped in three categories (Georghiou, 1994; Branstetter and Sakakibara, 2002):

•  inputs: whether the R&D network provides the initial conditions needed for the project to start

•  behaviours: whether a participant's behaviour differs from the one it would have if developing the project by itself

•  outputs: whether a participant embodies new competencies after participating in an R&D network.
These remarks were confirmed during our action research and they suggested not considering an R&D network only as a black box, but also evidencing the interactions among partners and the role of each single partner. The evaluation framework we propose thus consists of the following three main perspectives of analysis:

•  'Black box' perspective. The first perspective analyses the R&D network's interactions with, and impacts on, the environment without considering its internal processes and performances. Government institutions, financial institutions and other organisations (potential partners or competitors) can be interested in understanding the performance of the R&D network as a whole: its efficiency, its ability in fostering high quality results and its reputation. For instance, funding institutions can be interested in these results in order to decide on future financial support. Other R&D networks or organisations that operate in the same scientific-technological field can be interested in joining or in exchanging resources.

•  'Interaction' perspective. The second perspective deals with the relationships and the communications among partners, focusing on the sharing of competencies and resources in order to achieve particular scientific or technological objectives. On one hand, it focuses on outputs that are due to an effective collaboration among partners, not considering benefits belonging to a unique partner; on the other hand, it deals with the way partners access and use shared resources, it aims at identifying bottlenecks in the processes and it has managerial implications for the allocation of resources and the scheduling of activities.

•  'Unit' perspective. The last perspective pays attention to each single partner's activities. This perspective has two purposes. On one hand, it aims at identifying the contribution given by a partner to the R&D network in achieving its objectives. On the other hand, its purpose is to assess the benefits to the single centre of participating in the R&D network and whether it improves the unit's performances compared with the stand-alone situation. This perspective is consistent with the aim of considering the marginal effects provided by collaboration on participants' performances (Georghiou, 1994). It must be pointed out that a partner usually shares only some of its resources (human resources and infrastructures), so the evaluation of marginal effects can follow a double criterion: comparing performances before and after joining the R&D network, or comparing performances and behaviour between the resources involved in the R&D network and those that are not (Branstetter and Sakakibara, 2002).

3.3 Evaluation matrix

Evaluation dimensions belonging to each element and perspectives of analysis are both pivotal for evaluating an R&D network and they can be matched together to identify all the aspects to be monitored, namely the evaluation matrix (Figure 2). The matrix has the evaluation dimensions on the rows and the three perspectives of analysis on the columns, and it thus identifies the areas of investigation.

Figure 2  Evaluation matrix

Thanks to the literature review and the case studies, it is possible to fill in each area of investigation with a wide set of indicators. Among all the indicators provided, some can be chosen to design a tailor-made evaluation system according to the specific features and importance of each area of investigation for the R&D network examined. The set leverages significantly on the existing literature and many examples have already been provided in the previous sections, but some more examples can be useful in order to highlight the differences among the perspectives of evaluation (i.e., black box, interaction, unit) in the evaluation matrix.

For instance, considering research outputs within the 'outputs' element, different bibliometric indicators can be used for each perspective. In the 'black box' perspective, the number of publications realised by all the partners as a whole can be measured in order to show the critical mass gained by the R&D network and the relevance that it has assumed in the scientific community. In the 'interaction' perspective, the number of publications co-authored between partners can be considered in order to assess the degree of internal collaboration on common research fields. Finally, in the 'unit' perspective, the number of publications of each single partner can be measured so that every participant can compare its own performances with other partners, with competitors and with itself in previous years, in order to understand its performances and the benefits of joining the R&D network.

Another example can be provided considering the 'inputs' element: on one hand, the 'interaction' perspective focuses on existing resources such as the personnel involved in the collaboration, the level of co-working among partners, shared expenses and the level of usage of shared laboratories/technologies. On the other hand, the 'black box' and 'unit' perspectives monitor access to new resources at the network level (first perspective) and at the single unit level (third perspective). Indicators can evidence resource improvements, such as new personnel, new private/public funding obtained and, finally, investments in new technologies and equipment.

Indicators about the 'organisation' element focus mainly on the 'interaction' perspective and they refer to the density of the R&D network, interaction through formal and informal meetings, the level of communication through mail and phone call tracking, and shared documents. Indicators in the 'black box' perspective aim at assessing the R&D network's market approach, revealing the visibility of the network among stakeholders; hence, they examine, for instance, its presence at public events, conferences and policy meetings. In the same way, indicators in the 'unit' perspective assess whether participants' reputation rises and whether their behaviour differs from the stand-alone situation.

Finally, when it comes to the 'activities' element, indicators in the 'black box' perspective report the progress of activities and related costs to external stakeholders, and they are important especially when the R&D network is promoted and financially supported by external institutions (e.g., local government). The 'interaction' perspective in this case is important to monitor risk areas and to communicate work in progress to the network coordinator and to each partner: this is pivotal to reschedule activities and to reallocate resources if plan variations occur. Finally, the 'unit' perspective can help each single partner to monitor its contribution to common activities and to track its efforts, goals and incomes. The reporting of time and cost variances differs in the three perspectives in terms of frequency and degree of detail (less frequent and detailed for the first perspective, more for the other two).
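To make the distinction between the three perspectives concrete, the bibliometric example above can be sketched in a few lines of Python. The data model (publications tagged with the network partners their authors belong to) and the partner names are hypothetical illustrations, not data from the paper.

```python
# Sketch of bibliometric indicators for the three evaluation perspectives.
# The publication records and partner names below are invented examples.
publications = [
    {"title": "Paper A", "partners": {"PRO", "UnivDept1"}},
    {"title": "Paper B", "partners": {"UnivDept1"}},
    {"title": "Paper C", "partners": {"PRO", "UnivDept2"}},
]

# 'Black box' perspective: total output of the network as a whole.
total_publications = len(publications)

# 'Interaction' perspective: publications co-authored by two or more partners.
co_authored = sum(1 for p in publications if len(p["partners"]) >= 2)

# 'Unit' perspective: publication count per single partner.
per_partner = {}
for p in publications:
    for partner in p["partners"]:
        per_partner[partner] = per_partner.get(partner, 0) + 1
```

The same record set feeds all three indicators, which is why a shared repository of publications (mentioned in the data-gathering step below) makes the three reports cheap to produce.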


3.4 Evaluation process

As stated before, the evaluation framework is the base upon which to build specific evaluation scoreboards, that is, specific evaluation systems. It represents a support to identify the most relevant areas of investigation and the most useful indicators for each R&D network considered. During the case studies, we experimented with the design of specific evaluation systems for three selected R&D networks and, thanks to this on-the-field work, we could refine the overall process as described in the following.

An ad hoc evaluation system should stem from the features of the investigated R&D network; hence, the first step is to obtain a complete description of its characteristics. This can be done in particular considering the evaluation dimensions and the variables included in the R&D network model (i.e., objectives, resources involved, kind of organisation, activities, temporal and geographical dimension, etc.). In order to do so, the persons responsible for the R&D network strategy and partner coordination should be involved. In the case of 'star' or 'hybrid' structures, it is enough to deal with the unit or the committee that acts as pivot; in the case of 'net' structures, it is worth involving the heads of each partner so that all points of view are considered.

The starting point of the second step is the extensive description of the network obtained through the interviews. The aim of this stage is to extract a dashboard of indicators that are coherent with the evaluation purposes. It is important to note that not all the possible areas of investigation have the same importance for each R&D network and, moreover, not all the indicators developed in the literature can or have to be used: the interviews allow defining the relevance of the evaluation dimensions and perspectives of analysis so that a specific subset of indicators can be chosen from the evaluation matrix. For example, an R&D network could be more interested in research activities than in technology transfer ones; in such a case, the dashboard will include more indicators about scientific publications and fewer indicators about patenting activities or collaboration with firms. The opportunity of selecting a specific subset of indicators is also linked to the necessity of reducing the costs of the evaluation system. The selection of indicators should consider both the relevance of the areas of investigation and the robustness of the indicators (depending on the cost of collecting data, their significance and their impact on people's behaviour) (Azzone, 2000). For these reasons, for key areas several indicators have to be monitored, whilst for less significant ones only general and easily-collectible information will be included in the evaluation system.

A third step consists in data gathering on the basis of the selected indicators. This activity benefits from the existence of accounting or ICT systems that can provide data to calculate the indicators, or from the presence of other tools (for instance, repositories of scientific publications and patents, or laboratory logbooks to observe the usage of shared infrastructures).

In a fourth step, results should be outlined in three reports, one for each perspective of analysis according to the different stakeholders: external actors ('black box' perspective), management of the R&D network ('interaction' perspective) and management of each partner ('unit' perspective). Reporting has to be included in the design process of the specific evaluation system: it helps in refining the system developed through the involvement of the different final users.
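The second step (extracting a tailored dashboard from the evaluation matrix according to the relevance assigned during the interviews) can be sketched as a simple filtering rule. The matrix content, the relevance ratings and the keep-rule below are illustrative assumptions, not the paper's full indicator set.

```python
# Sketch of dashboard extraction from the evaluation matrix.
# Keys are (element, perspective) areas of investigation; contents are invented.
evaluation_matrix = {
    ("outputs", "black box"): ["total publications", "total patents"],
    ("outputs", "interaction"): ["co-authored papers", "joint prototypes"],
    ("outputs", "unit"): ["publications per partner"],
    ("inputs", "interaction"): ["co-working hours", "shared lab usage"],
}

# Relevance assigned during interviews (hypothetical ratings).
relevance = {
    ("outputs", "black box"): "high",
    ("outputs", "interaction"): "high",
    ("outputs", "unit"): "medium",
    ("inputs", "interaction"): "low",
}

# Keep-rule: high-priority areas keep all indicators, medium keep one, low keep none.
KEEP = {"high": None, "medium": 1, "low": 0}

dashboard = {
    area: indicators[:KEEP[relevance[area]]]
    for area, indicators in evaluation_matrix.items()
}
```

In practice the keep-rule would also weigh indicator robustness (cost of data collection, significance, behavioural impact), but the shape of the step is the same: relevance ratings in, a pruned dashboard out.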

4 Case studies (implementation)

We applied the process described in the previous section to three R&D networks: in particular, we analysed their characteristics and objectives according to the model proposed, identified the most important evaluation dimensions and variables and, finally, selected indicators accordingly. Brief descriptions of these case studies are provided in the following.

4.1 Health innovation network technology

The first R&D network (HINT) is composed of one PRO, three university departments and two healthcare institutes located in the province of Lecco (Italy). The network was born in May 2004 and it aims at improving research in biomedical technologies in order to subsequently transfer innovations into medical devices. Its objectives are mainly research outputs in bio-medical and technology-related fields and training about specific competencies. Less important is technology transfer towards enterprises, which is limited to patenting new machines or medical protocols. Partners aim at sharing complementary skills and advanced scientific technologies and, to a certain extent, at acquiring new ones. HINT has a 'hybrid structure' since its activities are coordinated by a steering committee composed of one member from each partner and a representative of the sponsor Univerlecco (Association of Public Bodies and Private Players). The committee supervises the progress of work and the relationship with external stakeholders, while coordination among partners is based on a defined schedule tracked by means of an intranet/project management system. Collaboration is devoted to a single project, split into four work packages; control of the WPs is based on an accounting system that reports time and cost deviations twice a year.

Since HINT mainly focuses its activities on research, the evaluation system contains bibliometric indicators as measurements of output. Furthermore, its 'hybrid structure' and the close localisation of partners allow researchers to work together, so that the indicators in the second perspective of analysis – partners' interaction – are the most relevant ones. In particular, the number of co-authored papers and their quality (namely, impact factor and citations) are calculated both as absolute values and as ratios to the researchers involved in the R&D network. Some indicators are collected for training activities, monitoring hours of training, numbers of participants, the presence of external professors and the quality of lectures. Technology transfer objectives are the least relevant and only information about patents is included in the dashboard of indicators. Through networking, HINT partners aim at sharing owned infrastructures and human resources, so information about the number of shared infrastructures and access to them, co-working and the number of meetings is taken into account. Since HINT would like to improve its capabilities, it needs to monitor new investments in instrumentation and also its proficiency in obtaining new funding. HINT has a steering committee managing the interaction among partners and interacting with stakeholders, but partners usually have horizontal linkages among themselves to develop common activities. Therefore, in this case, it is important to consider both the flows of information from and towards the committee and the level of cohesion among partners (e.g., the density of communications). In terms of interaction with external players, it is viable to consider linkages with other R&D institutions and the frequency of participation in meetings and conferences. As concerns the 'activities' element, HINT addresses all of its efforts to a specific goal, so that a basic project management approach can be followed: the committee should mainly devote attention to the accomplishment of milestones and the consumption of resources. To summarise, the specific evaluation system designed for HINT is mainly focused on shared resource utilisation in order to pursue achievements in bio-medical research fields.

4.2 NETwork to VALue university research (NETVAL)

NETVAL began its activities in 2002 and in 2006 it counted 47 TTOs of Italian universities; it has a national dimension and a long-term approach. Its main purposes are to perform research on IPRs, licensing, contract research and spin-off firms, and to diffuse best practices among participants and stakeholders (e.g., policy makers and firms), also organising training courses and conferences. This R&D network is not a legal entity and there are no established financial resources: each partner has to take care of funding depending on the activities it promotes or joins. NETVAL mainly focuses on human resources, both belonging to partners and coming from external institutions. Coordination of the network is based on a 'hybrid structure': it is managed by a steering committee that monitors the activities of each partner (or group of partners) and spreads initiatives among them through periodic reports. Within the committee, there is a coordinator that acts on behalf of the committee towards external stakeholders. Research on IPR, licensing and the valorisation of public research in general is mainly conducted by the single participants. At the network level, the focus is in particular on collecting data from the participants and on analysing them to provide indications regarding development trends, problems, opportunities and comparisons with international experiences. Despite the importance of these research activities, in these years the main focus of the network has been on knowledge sharing and teaching projects. These projects are designed and implemented by some of the partners and addressed mainly to researchers and administrative personnel. There is no common accounting system, but each partner that develops a course or a knowledge sharing activity (e.g., conference/seminar) is responsible for its scheduling and budget.

The main interest of NETVAL is in sharing experiences and best practices among the partners; this means that the outputs element considers those indicators related to training/teaching activities, mostly in the 'interaction' perspective. Hours of training activities, the number of participants and of partners involved, and external lecturers are aspects that should be considered in its evaluation system. NETVAL has neither infrastructures/technologies nor financial resources to share; it focuses on the personnel of each office and on their participation in training activities (both as lecturers and as audience) in order to further deepen technology transfer issues and increase mutual learning among TTOs. For this reason, indicators based on the overall number of people, the number of meetings and the percentage of researchers attending classes have been included in the evaluation system. NETVAL has a 'hybrid' organisational structure. The steering committee coordinates and promotes training courses, so that in the 'interaction' perspective information flows (such as the number of communications or reports) are measured, while in the 'unit' perspective the level of involvement in course design or implementation is taken into account. Since the committee has a coordinator for external communications, these are monitored with indicators such as participation in conferences or attendance at institutional workshops. Since the R&D network has neither established projects or work packages nor shared financial resources, the 'activities' element is of little relevance from the point of view of the first and second perspectives; costs and scheduling are monitored only for those partners that lead training courses. The evaluation system for NETVAL thus considers mainly indicators about sharing knowledge on recent findings in technology transfer issues, with an explicit focus on human resources training. When it comes to structure and activities, interactions among partners and collaboration in sharing best practices are stressed, as well as the role of the committee as pivot.

4.3 COCOON

The last R&D network studied involves 23 international partners, located in several regions all over Europe, that have jointly won a three-year EU-funded project. COCOON aims at supporting healthcare professionals in reducing risks in their daily practices by building knowledge-driven and dynamically adaptive networked communities within European healthcare systems. Its main objective is applied research and the implementation of a technological platform that allows the networking of family doctors. Technology transfer and training activities are not goals of the collaboration. The aim of the partners was to access complementary competences and to gain the critical mass needed to obtain funding from the European Commission. The sharing of human resources and infrastructures is very scarce since partners are distant; work packages have a stage-gate structure with well-defined deliverables and control is based mainly on time and costs on a yearly basis. The R&D network has a 'star structure' organisation, hence it is coordinated by one partner; partners are split into three homogeneous groups: a technological one, an end-users one and a multiplayer one. The coordinator manages the relationships among groups and among all the partners.

Like HINT, COCOON pursues research goals, but in this latter case the focus is on more applied research, so that, in addition to indicators on publications, achievements in software and information system performances should be monitored. The localisation of partners and the centralised organisational structure limit face-to-face collaborations, so the interaction perspective of analysis has a more limited role in this specific evaluation system. It is worth analysing only the interactions between partners and coordinator, and so including in the dashboard the number of contacts with the coordinator and accesses to the intranet (considering also document uploads and downloads and updates of each partner's scheduling and deliverables). COCOON has several projects and so a multi-project management approach guarantees a better understanding and control of the activities undertaken. Compared to the previous R&D networks, COCOON has quite few variables to assess: outputs, activities and financial resources are clearly defined and addressed to each partner; scheduling and the role of the coordinator are well established, so that only the budget, the progress of work packages and the conformity of output features with the agreement should be monitored. These feedbacks are key both in the 'black box' perspective, since they are needed in order to obtain funding from the European Commission, and in the 'unit' perspective, because they track each partner's own activities and reduce risks of delays.
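The time and cost variances that COCOON (and HINT) monitor for work-package control can be computed with the standard earned-value definitions from project management; the paper does not spell out the formulas, so the definitions and the work-package figures below are an illustrative assumption.

```python
# Sketch of cost and time (schedule) variance indicators for the 'activities'
# element, using standard earned-value definitions. Figures are invented.

def cost_variance(earned_value, actual_cost):
    # Positive: under budget; negative: cost overrun.
    return earned_value - actual_cost

def schedule_variance(earned_value, planned_value):
    # Positive: ahead of schedule; negative: behind schedule.
    return earned_value - planned_value

# One hypothetical work package at a yearly checkpoint (EUR).
ev, ac, pv = 120_000.0, 135_000.0, 110_000.0
cv = cost_variance(ev, ac)      # negative here: the WP is over budget
sv = schedule_variance(ev, pv)  # positive here: the WP is ahead of plan
```

As the text notes, the same two variances would be reported at different frequencies and levels of detail in the 'black box', 'interaction' and 'unit' reports.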

4.4 Case studies comparison

The evaluation framework proposed was derived from the direct involvement in the first R&D network presented (HINT) and from the analysis of the literature. We decided to test and refine it on the two other cases to verify and improve its generalisability. For this reason, we searched for and selected two R&D networks that differ significantly. The evaluation framework proved useful in the development of all the specific evaluation systems. In particular, the flexibility of the framework in acknowledging their differences can be highlighted by comparing the evaluation systems of the three cases. We used information coming from the cases to profile each R&D network, to understand the most relevant evaluation dimensions for each of them and, finally, to extract from the evaluation matrix those indicators that better match the specific evaluation purposes. To summarise and compare the results, we can highlight, for each evaluation dimension, its importance for each R&D network in the different perspectives of analysis. In particular, a priority class has been assigned, and this is also a proxy of the number of indicators used for that dimension: high priority suggests a large number of indicators needed to monitor several aspects of the dimension; medium priority means that only some indicators are used; low priority implies the use of no or only a few indicators for specific aspects.

Figure 3  Evaluation matrix implementation

Using radar charts (see Figure 3), it is possible to verify in a visual way the differences among the three cases. The analysis of the rows of Figure 3 suggests that, within each R&D network, the importance of the evaluation dimensions varies on the basis of the stakeholders considered (perspectives of analysis) and thus suggests the use of ad hoc reporting systems. The analysis of the columns in the same figure shows how, even considering the same perspective of evaluation, different priorities and weights can exist among different R&D networks. This analysis sheds light on the effectiveness of the evaluation framework in highlighting the differences that exist among the R&D networks, in suggesting different evaluation requirements and in supporting the development of tailored evaluation and reporting systems.
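The priority classes behind radar charts of this kind can be turned into numeric series in a straightforward way; the mapping and the priority assignments below are illustrative, not the actual values of Figure 3.

```python
# Sketch: converting priority classes into the radii of one radar-chart series.
# The dimension names and priorities are invented examples.
SCORE = {"low": 1, "medium": 2, "high": 3}

# Priorities per evaluation dimension for one network in one perspective.
priorities = {
    "research outputs": "high",
    "human resources": "high",
    "organisation": "medium",
    "activities": "low",
}

# One radius per dimension, in a fixed alphabetical order around the chart.
dimensions = sorted(priorities)
radii = [SCORE[priorities[d]] for d in dimensions]
```

Plotting one such series per network on the same axes reproduces the visual comparison described in the text: overlapping shapes reveal where the networks weigh the same dimension differently.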

5 Conclusions

In the literature, there are many contributions related to R&D networks, which have been studied from several points of view (for instance, social network analysis, knowledge management theory and the resource-based perspective). However, very few of these works deal with evaluation issues. This paper summarises the main contributions on R&D networks and evaluation and provides an evaluation framework to support the development of specific evaluation systems assessing the efficacy and efficiency of R&D networks. The first problem faced in designing evaluation frameworks is the heterogeneity of situations resulting from different objectives, typologies of partners and collaborations, and geographical and temporal dimensions. We summarised all these aspects into a model that can be applied to a wide range of R&D networks. The model highlights five elements: outputs, inputs, organisation, activities and environment. Furthermore, we introduced and discussed the importance of considering different points of view in the evaluation process; in particular, we suggest considering the perspectives of the three main categories of R&D network stakeholders: external observers (such as sponsors, public administrations, etc.) interested in understanding the impact of the R&D network as a whole; internal network managers (i.e., the steering committee or coordinating partner) who have to coordinate and control activities; and, finally, the managers of each partner who have to assess the benefits of collaborating in an R&D network. By matching R&D network elements and perspectives of analysis, areas of investigation have been identified and outlined in the evaluation matrix. Thanks to the literature in various fields, it has been possible to identify a sample of indicators for each area (e.g., bibliometric indicators, patent indicators, relationship density, budget and scheduling deviations, etc.).
The development of specific evaluation systems requires many choices in terms of what to evaluate and how, and it must cope with the necessity of limiting the number of indicators in order to contain the costs of data gathering. The framework proposed helps to structure the development process and to identify the most relevant dimensions and variables considering the characteristics of the R&D network and the perspectives of the different stakeholders; finally, it supports the selection of the most useful indicators. We tested this framework on three case studies and, with its support, we were able to (more) rapidly define an evaluation scoreboard for each of the three R&D networks involved. Even if more case studies could be useful to assess the generalisability of the evaluation framework, feedback from R&D network interviewees revealed that the scoreboards developed using it reflect the most important information needed. Furthermore, the interviews highlighted that the evaluation framework contributed to evidencing some aspects not previously considered, but acknowledged as important for network control and management.

Given the limited literature on R&D network evaluation, further research could be useful. Additional efforts could be directed towards the validation of the framework proposed in other cases and towards deepening its effectiveness. In particular, a critical point remains the selection of the best indicators for the evaluation systems: among all those available in the literature, it is worth investigating which are more appropriate to evaluate R&D networks considering their reliability, simplicity and cost of data collection. Finally, an interesting avenue for new research regards the life cycle of R&D networks: informative requirements could be different depending on the stage of development of the R&D network; thus, it could be useful to examine how the evaluation system could evolve and on the basis of which criteria indicators should be added or discarded.

Acknowledgements

The authors would like to acknowledge the R&D networks involved in the case studies for their support and in particular the Hint@Lecco project for sponsoring this research through a grant. They are also thankful to Mattutzu Daniele and Podenzani Gabriele, who provided tireless help during the whole research process. The authors gratefully acknowledge the comments of two anonymous referees.

References

Achrol, R.S. and Kotler, P. (1999) 'Marketing in the network economy', Journal of Marketing, Vol. 63 (Special Issue), pp.146–163.

Almeida, P., Song, J. and Grant, R. (2002) 'Are firms superior to alliances and markets? An empirical test of cross-border knowledge building', Organization Science, Vol. 13, pp.147–161.

Altenburg, T. (2000) 'Linkages and spillovers between transnational corporations and small and medium-sized enterprises in developing countries: opportunities and policies', in UNCTAD, TNC-SME Linkages for Development: Issues-Experiences-Best-Practices, United Nations Publication, New York and Geneva, UNCTAD/ITE/TEB/1, pp.3–61.

Amesse, F. and Cohendet, P. (2001) 'Technology transfer revisited from the perspective of the knowledge-based economy', Research Policy, Vol. 30, pp.1459–1478.

Asheim, B.T. and Coenen, L. (2004) 'The role of regional innovation systems in a globalizing economy: comparing knowledge bases and institutional frameworks of Nordic clusters', paper presented at the DRUID Summer Conference 2004 on Industrial Dynamics, Elsinore, Denmark, 14–16 June.

Autio, E. and Laamanen, T. (1995) 'Measurement and evaluation of technology transfer: review of technology transfer mechanisms and indicators', International Journal of Technology Management, Vol. 10, Nos. 7/8, pp.643–664.

Azzone, G. (2000) Innovare il Sistema di Controllo di Gestione, Etas Libri, Milano.

Baldini, N., et al. (2006) 'Institutional changes and the commercialization of academic knowledge: a study of Italian universities' patenting activities between 1965 and 2002', Research Policy, Vol. 35, pp.518–532.

Bartezzaghi, E., Spina, G. and Verganti, R. (1999) Organizzare le PMI per la Crescita, Il Sole 24 ORE, Milano.


Beaver, D.D. (2004) 'Does collaborative research have greater epistemic authority?', Scientometrics, Vol. 60, No. 3, pp.399–408.

Beldebros, R., Capannelli, G. and Kyoji, F. (2001) 'Backward vertical linkages of foreign manufacturing affiliates: evidence from Japanese multinationals', World Development, Vol. 29, No. 1, pp.189–208.

Blind, K. and Grupp, H. (1999) 'Interdependencies between the science and technology infrastructure and innovation activities in German regions: empirical findings and policy consequences', Research Policy, Vol. 28, No. 5, pp.451–468.

Boisot, M. (1986) 'Markets and hierarchies in cultural perspective', Organization Studies, Vol. 7, pp.135–158.

Branstetter, L.G. and Sakakibara, M. (2002) 'When do research consortia work well and why? Evidence from Japanese panel data', The American Economic Review, Vol. 92, No. 1.

Breschi, S. and Lissoni, F. (2001) 'Knowledge spillovers and local innovation systems: a critical survey', Industrial and Corporate Change, Vol. 10, pp.975–1005.

Bresnahan, T.F. and Trajtenberg, M. (1995) 'General-purpose technologies: engines of growth', Journal of Econometrics, Vol. 65, No. 1, pp.83–108.

Brown, M.G. and Svenson, R.A. (1998) 'Measuring R&D productivity', Research Technology Management, Vol. 41, No. 6, pp.30–35.

Brusoni, A. and Geuna, A. (2003) 'An international comparison of sectoral knowledge bases: persistence and integration in the pharmaceutical industry', Research Policy, Vol. 32, pp.1897–1912.

Cagliano, R., Chiesa, V. and Manzini, R. (2000) 'Differences and similarities in managing technological collaborations in research, development and manufacturing: a case study', Journal of Engineering and Technology Management, Vol. 17, pp.193–224.

Campbell, D.F.J. and Guttel, W.H. (2005) 'Knowledge production of firms: research networks and the 'scientification' of business R&D', International Journal of Technology Management, Vol. 31, pp.152–175.

Cantner, U. and Graf, H. (2006) 'The network of innovators in Jena: an application of social network analysis', Research Policy, Vol. 35, pp.463–480.

Carayannis, E.G. and Campbell, D.F.J. (2006) Knowledge Creation, Diffusion, and Use in Innovation Networks and Knowledge Clusters, Praeger Publishers, Westport.

Chesbrough, H. (2003) Open Innovation, Harvard University Press, Cambridge.

Chetty, S.K. and Wilson, H.I.M. (2003) 'Collaborating with competitors to acquire resources', International Business Review, Vol. 12, pp.61–81.

Coccia, M. (2001) 'A basic model for evaluating R&D performance: theory and application in Italy', R&D Management, Vol. 31, No. 4, pp.453–464.

Coombs, R., Harvey, M. and Tether, B.S. (2003) 'Analysing distributed processes of provision and innovation', Industrial and Corporate Change, Vol. 12, No. 6, pp.1125–1155.

De Maio, A., Corso, M. and Verganti, R. (1994) Gestire l'Innovazione e Innovare la Gestione, Etas Libri, Milano.

Debackere, K., Clarysse, B. and Rappa, M. (1996) 'Dismantling the Ivory Tower: the influence of networks on innovative output in emerging technologies', Technological Forecasting and Social Change, Vol. 53, pp.139–154.

Doz, Y. and Hamel, G. (1998) Alliance Advantage, Harvard Business School Press.

Dubini, P. and Aldrich, H. (1991) 'Personal and extended networks are central to the entrepreneurship process', Journal of Business Venturing, Vol. 6, pp.305–313.

Edquist, C. (2005) 'Systems of innovation: perspectives and challenges', in Fagerberg, J., Mowery, D. and Nelson, R. (Eds.): The Oxford Handbook of Innovation, Oxford University Press, Oxford.

Eisenhardt, K. and Martin, J. (2000) 'Dynamic capabilities: what are they?', Strategic Management Journal, Vol. 21, pp.1105–1121.


Ejermo, O. and Karlsson, C. (2006) ‘Interregional inventor networks as studied by patent coinventorships’, Research Policy, Vol. 35, pp.412–430.
Ernst, D. and Kim, L. (2002) ‘Global production networks, knowledge diffusion, and local capability formation’, Research Policy, Vol. 31, pp.1417–1429.
Faulkner, W. and Senker, J. (1995) Knowledge Frontiers: Public Sector Research and Industrial Innovation in Biotechnology, Engineering Ceramics, and Parallel Computing, Clarendon Press, Oxford.
Fischer, M. (1998) ‘The new economy and networking’, in Jones, D.C., Steil, B., Litan, R.E., Freeman, R.B. and Brynjolfsson, E. (Eds.): Handbook of Economics in the Information Age, Academic Press.
Fontana, R., Geuna, A. and Matt, M. (2003) ‘Firm size and openness: the driving forces of university-industry collaboration’, SPRU Electronic Working Paper SEWP 103, University of Sussex, Brighton, UK.
Fu, Y. (2005) ‘Measuring personal networks with daily contacts: a single item survey question and the contact diary’, Social Networks, Vol. 27, pp.169–186.
Fusfeld, H.I. (1995) ‘Industrial research – where it’s been, where it’s going’, Research Technology Management, Vol. 38, pp.52–56.
Georghiou, L. (1994) Impact of the Framework Programme on European Industry, EUR 15907 EN, Office for Official Publications of the European Communities, Luxembourg.
Georghiou, L. (2001) ‘Evolving frameworks for European collaboration in research and technology’, Research Policy, Vol. 30, pp.891–903.
Georghiou, L. and Roessner, D. (2000) ‘Evaluating technology programs: tools and methods’, Research Policy, Vol. 29, pp.657–678.
Gerritsma, F. and Omta, S.W.F. (1999) The Content Methodology Facilitating Performance Measurement by Assessing the Complexity of R&D Projects, Faculty of Management and Organization, University of Groningen, Netherlands.
Gomes-Casseres, B. (1994) ‘Group versus group: how alliance networks compete’, Harvard Business Review, July–August, pp.62–74.
Gomes-Casseres, B. (1997) ‘Alliance strategies of small firms’, Small Business Economics, Vol. 9, pp.33–44.
Gonda, K. and Kakizaki, F. (1995) ‘Research, technology and development evaluation: development in Japan’, Scientometrics, Vol. 34, No. 3, pp.375–389.
Grandori, A. and Soda, G. (1995) ‘Interfirm networks: antecedents, mechanisms and forms’, Organization Studies, Vol. 16, No. 2, pp.183–214.
Gummesson, E. (2000) Qualitative Methods in Management Research, 2nd ed., Sage, Thousand Oaks, CA.
Hagedoorn, J. and Cloodt, M. (2003) ‘Measuring innovative performance: is there an advantage in using multiple indicators?’, Research Policy, Vol. 32, pp.1365–1379.
Hastings, C. (1995) ‘Building the culture of organizational networking’, International Journal of Project Management, Vol. 13, pp.259–263.
Hayashi, T. (2003) ‘Effect of R&D programmes on the formation of university-industry-government networks: comparative analysis of Japanese R&D programmes’, Research Policy, Vol. 32, pp.1421–1442.
Karlsson, M., Trygg, L. and Elfström, B. (2004) ‘Measuring R&D productivity: complementing the picture by focusing on research activities’, Technovation, Vol. 24, pp.179–186.
Kingsley, G., Bozeman, B. and Coker, K. (1996) ‘Technology transfer and absorption: an R&D value mapping approach’, Research Policy, Vol. 25, pp.967–995.
Larédo, P. (1998) ‘The networks promoted by the framework programme and the questions they raise about its formulation and implementation’, Research Policy, Vol. 27, pp.589–598.


Laursen, K. and Salter, A. (2004) ‘Searching low and high: what types of firms use universities as a source of innovation?’, Research Policy, Vol. 33, pp.1201–1215.
Lawrence, P.R. and Lorsch, J.W. (1965) ‘Organizing for product innovation’, Harvard Business Review.
Lewin, K. (1947) ‘Frontiers in group dynamics: II. Channels of group life; social planning and action research’, Human Relations, Vol. 1, No. 2, pp.143–153.
Lint, O. and Pennings, E. (1999) ‘Finance and strategy: time-to-wait or time-to-market?’, Long Range Planning, Vol. 32, No. 5, pp.483–493.
Luukkonen, T. (1998) ‘The difficulties in assessing the impact of EU framework programmes’, Research Policy, Vol. 27, pp.599–610.
Luukkonen, T. (2005) ‘Variability in organisational forms of biotechnology firms’, Research Policy, Vol. 34, pp.555–570.
Malecki, E. (1981a) ‘Government funded R&D: some regional economic implications’, The Professional Geographer, Vol. 33, No. 1, pp.72–82.
Malecki, E. (1981b) ‘Science technology and regional economic development: review and prospects’, Research Policy, Vol. 10, pp.312–334.
Malerba, F. (Ed.) (2004) Sectoral Systems of Innovation: Concepts, Issues and Analyses of Six Major Sectors in Europe, Cambridge University Press, Cambridge.
Mansfield, E., Romeo, A., Schwartz, M., Teece, D., Wagner, S. and Brach, P. (1982) Technology Transfer, Productivity and Economic Policy, W.W. Norton & Company, New York, London.
Miotti, L. and Sachwald, F. (2003) ‘Co-operative R&D: why and with whom? An integrated framework of analysis’, Research Policy, Vol. 32, pp.1481–1499.
Moed, H.F. (2000) ‘Bibliometric indicators reflect publication and management strategies’, Scientometrics, Vol. 47, No. 2, pp.323–346.
Moser, M.R. (1985) ‘Measuring performance in R&D settings’, Research Management, Vol. 28, No. 5, pp.31–33.
Narin, F., Hamilton, K. and Olivastro, D. (1997) ‘The increasing linkage between U.S. technology and public science’, Research Policy, Vol. 26, No. 3, pp.317–330.
Nobelius, D. (1999) Dedicated Versus Dispersed Advanced Engineering Structure – Implications for Internal Technology Development and Transfer, Dept. of Operations Management and Work Organization, Chalmers University of Technology, Gothenburg, Sweden.
Nonaka, I. and Takeuchi, H. (1995) The Knowledge Creating Company: How Japanese Companies Create the Dynamics of Innovation, Oxford University Press, New York.
OECD (2002) Proposed Standard Practice for Surveys on Research and Experimental Development – Frascati Manual, OECD, Paris.
Ojasalo, J. (2004) ‘Key network management’, Industrial Marketing Management, Vol. 33, pp.195–205.
Oxman, J.A. (1992) ‘The global service quality measurement program at American Express Bank’, National Productivity Review, Summer.
Persson, O., Glanzel, W. and Danell, R. (2004) ‘Inflationary bibliometric values: the role of scientific collaboration and the need for relative indicators in evaluative studies’, Scientometrics, Vol. 60, No. 3, pp.421–432.
Poh, K.L., Ang, B.W. and Bai, F.A. (2001) ‘A comparative analysis of R&D project evaluation methods’, R&D Management, Vol. 31, No. 1, pp.63–75.
Porter, M. (1998) ‘Clusters and the new economics of competition’, Harvard Business Review, November–December.
Powell, W. and Grodal, S. (2005) ‘Networks of innovators’, in Fagerberg, J., Mowery, D. and Nelson, R. (Eds.): The Oxford Handbook of Innovation, Oxford University Press, Oxford.


Reagans, R.E. and Zuckerman, E.W. (1999) ‘Networks, diversity and performance: the social capital of corporate R&D units’, Research Paper No. 1585, Graduate School of Business, Stanford University.
Rigby, J. and Edler, J. (2005) ‘Peering inside research networks: some observations on the effect of the intensity of collaboration on the variability of research quality’, Research Policy, Vol. 34, pp.784–794.
Roberts, E. and Berry, C. (1985) ‘Entering new businesses: selecting strategies for success in the biotechnology industry’, Strategic Management Journal, Vol. 15, pp.387–394.
Robertson, P.L. and Langlois, R.N. (1995) ‘Innovation, networks and vertical integration’, Research Policy, Vol. 24, pp.543–562.
Robinson, V.M.J. (1993) ‘Current controversies in action research’, Public Administration Quarterly, Vol. 17, No. 3, pp.263–290.
Roper, S., Hewitt-Dundas, N. and Love, J.H. (2004) ‘An ex ante evaluation framework for the regional benefits of publicly supported R&D projects’, Research Policy, Vol. 33, pp.487–509.
Saxenian, A. (1996) Regional Advantage: Culture and Competition in Silicon Valley and Route 128, Harvard University Press, Cambridge, MA.
Singh, J. (2004) Innovation and Knowledge Diffusion in the Global Economy, Harvard School of Business Administration and Department of Economics.
Smith, G.P. and Reinertsen, D.G. (1995) Developing Products in Half the Time, Van Nostrand Reinhold.
Snow, C.C., Miles, R.E. and Coleman, H.J., Jr. (1992) ‘Managing 21st century network organisations’, Organisational Dynamics, Winter.
Steensma, H.K. (1996) ‘Acquiring technological competencies through inter-organisational collaboration: an organisational learning perspective’, J. Eng. Technol. Manage., Vol. 12, pp.267–286.
Susman, G.I. and Evered, R.D. (1978) ‘An assessment of the scientific merits of action research’, Administrative Science Quarterly, Vol. 23, pp.582–603.
Thorelli, H.B. (1986) ‘Networks: between markets and hierarchies’, Strategic Management Journal, Vol. 7, pp.37–51.
Tijssen, R.J.W. (1998) ‘Quantitative assessment of large heterogeneous R&D networks: the case of process engineering in the Netherlands’, Research Policy, Vol. 26, pp.791–809.
Van Aken, J.E. and Weggeman, M.P. (2000) ‘Managing learning in informal innovation networks: overcoming the Daphne-dilemma’, R&D Management, Vol. 30, No. 2, pp.139–149.
Van Aken, J.E., Hop, L. and Post, G.J.J. (1998) ‘The virtual organisation, a special mode of stronger inter-organisational co-operation’, in Hitt, M.A., Ricart, J.E. and Nixon, R.D. (Eds.): Managing Strategically in an Interconnected World, Wiley, Chichester.
Van Leeuwen, T.N., Visser, M.S., Moed, H.F., Nederhof, T.J. and Van Raan, A.F.J. (2003) ‘The holy grail of science policy: exploring and combining bibliometric tools in search of scientific excellence’, Scientometrics, Vol. 57, No. 2, pp.257–280.
Verganti, R., Landoni, P. and Salerno, M. (2004) La Valutazione della Ricerca e del Trasferimento Tecnologico in Ambito Regionale, Guerini e Associati, Milano.
Wagner, C.S. and Leydesdorff, L. (2005) ‘Network structure, self-organization, and the growth of international collaboration in science’, Research Policy, Vol. 34, pp.1608–1618.
Webster, A. (1994) ‘University-corporate ties and the construction of research agendas’, Sociology, Vol. 28, No. 1, pp.123–142.
Yin, R.K. (1994) Case Study Research: Design and Methods, Sage, Thousand Oaks, CA.
