August 27, 2008 16:55 WSPC/173-IJITDM

00302

International Journal of Information Technology & Decision Making Vol. 7, No. 3 (2008) 529–546 © World Scientific Publishing Company

AN EXCEL-BASED DECISION SUPPORT SYSTEM FOR SCORING AND RANKING PROPOSED R&D PROJECTS

ANNE DE PIANTE HENRIKSEN∗
Integrated Science and Technology Department
James Madison University, MSC 4102
Harrisonburg, VA 22807, USA
[email protected]

SUSAN W. PALOCSAY
Computer Information Systems and Management Science Department
James Madison University, MSC 0202
Harrisonburg, VA 22807, USA
[email protected]

One of the most challenging aspects of technology management is the selection of research and development (R&D) projects from among a group of proposals. This paper introduces an interactive, user-friendly decision support system for evaluating and ranking R&D projects and demonstrates its application on an example R&D program. It employs the scoring methodology developed by Henriksen and Traynor to provide a practical technique that considers both project merit and project cost in the evaluation process, while explicitly accounting for trade-offs among multiple decision criteria.1 The framework of the Excel-based system, PScore, is presented with an emphasis on the potential benefits of using this methodology with computer-automated extensions that facilitate and enhance managerial review and decision-making capabilities.

Keywords: R&D project evaluation; R&D project selection; scoring; ranking; interactive decision support.

∗Corresponding author.

1. Introduction

The challenge of allocating scarce resources to a group of proposed research and development (R&D) projects has been addressed in the literature for over 50 years. Many prescriptive approaches have been described, ranging from simple checklists to complex mathematical programming models; however, very few software tools have been made available that could actually be used by decision-makers to aid in the process of identifying the “best” proposed projects.

The job of identifying the “best” R&D projects poses many questions. What are the important criteria?2,3 How many projects should be funded?4 How do we


know which projects have the highest probability of success?5 How do we estimate the return a project will have if it is successful?6 Are we using the right decision support tools and methodologies to select R&D projects?7–12

Over the past five decades, a variety of technical approaches have been proposed to try to address these questions with quantitative rigor. According to the classification scheme of Henriksen and Traynor, R&D project selection methods can be categorized as unstructured peer review, scoring, mathematical programming, economic analysis, decision analysis, interactive approaches, artificial intelligence, and portfolio optimization.1 Many of these methods are mathematically complex and, as a result, can pose formidable implementation challenges for R&D managers. Consequently, they have been largely ignored by real-world R&D organizations unless conducted by a trained analyst.7,8 Among these categories, scoring has been found to be the most widely used quantitative technique for evaluating R&D projects because of its flexibility, its ease of use, its ability to be both “homegrown” and personalized, its incorporation of qualitative criteria, and its reasonable degree of rigor for the time and effort invested.13

Scoring, in this context, is the process of assigning ordinal scale values to R&D projects for the purpose of ranking the projects with respect to some criteria. The projects are ranked so that decision-makers can optimally distribute limited resources to those projects that the organization believes will bring maximum return. Scoring requires determining the criteria against which the candidate projects will be evaluated and establishing the response scale on which the evaluation will take place. Once each project is evaluated with respect to each criterion and assigned a rating on the evaluation scale, the results must be combined to obtain the ordinal scale score using some kind of mathematical computation.
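As a concrete illustration of the scoring idea just described (rate each project against weighted criteria, combine the ratings into a single score, and rank), consider the following sketch. It is deliberately generic and is not PScore's model, which is developed later in the paper; the projects, ratings, and weights are hypothetical.

```python
def weighted_score(ratings, weights):
    """Combine per-criterion ratings (1-5 scale, 5 best) into one score
    using a normalized weighted sum."""
    return sum(r * w for r, w in zip(ratings, weights)) / sum(weights)

# Hypothetical projects rated on three criteria, with the first criterion
# weighted twice as heavily as the others.
projects = {"A": [5, 3, 4], "B": [2, 5, 3], "C": [4, 4, 4]}
weights = [2, 1, 1]

ranking = sorted(projects, key=lambda p: weighted_score(projects[p], weights),
                 reverse=True)
print(ranking)  # ['A', 'C', 'B']
```

Any monotone combination rule induces such an ordinal ranking; the particular rule PScore uses is given in Eq. (1) of Sec. 3.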
The purpose of this paper is to present a practical, user-friendly software tool, referred to as PScore, for scoring and ranking proposed R&D projects in the spreadsheet environment. It is based on the R&D project selection methodology developed by Henriksen and Traynor to explicitly account for the fact that the value of a proposed project is a function of both its merit and its cost, and to provide a rational basis for making trade-offs between multiple decision criteria.1 By implementing the method in Excel with a Visual Basic for Applications (VBA) graphical user interface, the process of entering data is structured, calculations are automated, and results are displayed in well-organized tables and various graphical charts designed for managerial review.

Figure 1 shows an overview of PScore’s inputs and outputs. Inputs include the total available R&D program funding, the title and cost of each proposed project, the criteria weights for the particular R&D program, and the evaluators’ questionnaire responses for each proposed project. Outputs from PScore include a merit score, the relative (scaled) funds requested, and a value index for each candidate project, as well as the relative rankings of all the projects. The merit score assigned to a project depends on how well the project meets the


Fig. 1. PScore R&D project selection software inputs and outputs.

pre-identified criteria, as determined by the peer review team, and the weights assigned to the criteria by the decision-making organization. The funds request (i.e. the cost) is scaled, and then a value index is calculated for each proposed project to combine the weighted contributions from merit with the scaled funds request. The candidate projects are ranked, for the purpose of resource allocation and other decision-making, based on their relative scores. The overall goal of this process is to identify those projects with the highest relative “technical bang for the buck.” Further analysis to determine the sensitivity of project rankings to the primary decision objective and to the criteria weights is also supported by the software.

In this paper, the process of using the PScore tool for project scoring and ranking is illustrated with an example R&D program. Brief explanations of the underlying methodology for project evaluation are included where appropriate. The paper also provides a discussion of the evaluation results from a managerial perspective, including the prudent interpretation of results, sensitivity analysis, and limitations of the selection process.

2. R&D Project Evaluation Using PScore

PScore is a Microsoft Excel spreadsheet-based application with a graphical user interface constructed in the VBA programming language. VBA supports the creation of dialog boxes, message boxes, buttons, and forms, which then allow a user to enter data and perform other tasks in an automated manner without having to interact with the actual computer code. In PScore, input data on the R&D program and individual projects is used to generate merit scores and value indices for each proposed project, and results are ranked and displayed on an Excel worksheet


and charts. The example R&D program used here to demonstrate PScore consists of twelve projects and four evaluators with a range of merit scores and funding requests. The project data were specifically designed to illustrate a variety of funding and scoring situations, and all 12 projects are evaluated by each evaluator. From the perspective of the user, the general steps in applying PScore to an R&D program are as follows:

(i) Initialize the R&D program evaluation process.
(ii) Enter R&D program information.
(iii) Specify decision criteria (5- or 6-question peer evaluation form).
(iv) Specify decision criteria weights.
(v) Enter cost and peer review data for individual projects.
(vi) Analyze scoring and ranking results.
(vii) Perform sensitivity analysis and evaluate its effect on scoring and ranking results.

Each of these steps is described in detail below and illustrated with accompanying screen shots of the PScore output.

2.1. R&D program initialization and information

The Introduction form for PScore, shown in Fig. 2, allows the user to print the text of the introduction, view the dialog box for entering criteria weights, navigate to the Excel Summary worksheet to review results, view and print all four peer-review questionnaires from a separate Microsoft Word file, and run the application to initialize a new R&D program. “Run PScore” takes the user to the Main Menu form shown in Fig. 3. The forms in Figs. 2 and 3 are bypassed if the user is opening an existing program containing previously entered data; however, both forms can be accessed from the Summary worksheet.

The evaluation process involves three main data input procedures: entering the new R&D program information, specifying the criteria weights, and filling in the proposal worksheets for each of the 12 candidate projects. These procedures are initiated from the Main Menu in Fig. 3 using “Begin a New Program,” which brings up the VBA form shown in Fig. 4. On this form, the user is asked to enter the title of the R&D program, the number of project proposals, and the maximum number of evaluators. The user must also indicate which of two available questionnaires will be used by the evaluators in their assessment of each project.

2.2. Decision criteria

The decision criteria for R&D project evaluation are represented by the questions in the peer-review questionnaire. PScore evaluates R&D projects using four criteria: relevance of the proposed project to the institution’s mission and research objectives; level of risk to be incurred by the project; reasonableness of the stated outcomes based on available resources; and several kinds of return. There are two


Fig. 2. PScore introduction form.

different questionnaires corresponding to two possible funding situations: one questionnaire contains five questions and the other contains six. An example of the six-question questionnaire for initial project funding is shown in Appendix A. In some federal organizations, there is a component of return that concerns the institution’s programmatic goals, usually defined by current governmental interests. This additional component gives rise to an additional question concerning programmatic return. The five-question questionnaire, which omits any reference to programmatic return, was used for this example R&D program. Note that with minor word alterations, both questionnaires can also be used for evaluating the issue of continued funding of ongoing programs.

Some of the decision criteria are independent of one another, but others are not. Some are “tradable,” meaning that when one criterion increases, the other necessarily decreases, and vice versa. Return and probability of success, for example, are tradable criteria. Question responses pertaining to tradable criteria are added together in the computation of the merit score to maximize the mathematical result of mid-range responses. The tradable criteria terms form a group or cluster, which is then multiplied by the other criteria or criteria clusters. For criteria


Fig. 3. Main Menu dialog box for beginning a new R&D program evaluation.

that are independent, multiplying question responses (or question response clusters) has the desirable effect of minimizing the contribution of low-rated responses.
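The two combination rules just described can be illustrated with a small sketch (ours, not PScore's VBA; it assumes responses are coded with 5 as the most favorable value). A weighted sum is forgiving of offsetting mid-range responses, while a weighted geometric product penalizes any single low rating.

```python
def normalize_cluster(weights):
    """Normalize raw importance weights within one criteria cluster to sum to 1."""
    total = sum(weights)
    return [w / total for w in weights]

def additive(q_a, q_b, w=(1, 1)):
    """Tradable criteria (e.g. risk vs. return): a normalized weighted sum."""
    w_a, w_b = normalize_cluster(w)
    return w_a * q_a + w_b * q_b

def multiplicative(q_a, q_b, w=(1, 1)):
    """Independent criteria: a weighted geometric product."""
    w_a, w_b = normalize_cluster(w)
    return (q_a ** w_a) * (q_b ** w_b)

# A balanced mid-range pair vs. an extreme pair with the same total:
print(additive(3, 3), additive(5, 1))              # the sum is indifferent (3.0 vs 3.0)
print(multiplicative(3, 3), multiplicative(5, 1))  # the product punishes the low rating
```

With equal raw weights, a two-member cluster normalizes to 0.50 each and a three-member cluster to 0.33 each, matching the behavior described for PScore in Sec. 2.3.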

2.3. Decision criteria weights

In the second data entry procedure, weights must be assigned to the criteria in each cluster to indicate their relative importance in computing scores for the proposed projects. One approach, and a very common one for ranking the relative importance of decision criteria, is to compare the criteria in each cluster in a pairwise manner.14 As an example, Fig. 5 illustrates the pairwise comparison process for the Risk versus Return cluster. The criteria are weighted to discriminate for those projects that are the most appropriate for a particular type of R&D (e.g. basic versus applied) and that are the most consistent with the organization’s strategic goals and objectives when the project score is computed. For example, in basic research, the tolerance for risk (i.e. the probability that the intended outcome of the project may not be successful) is inherently higher than it should be for applied research. Note that in the peer evaluation questionnaire, risk is assessed by asking for the probability that the project will be able to meet its stated goals and objectives. This results in a response scale that matches that of


Fig. 4. Dialog box for entering new R&D program information.

Fig. 5. Pairwise comparisons assigning weights to the risk vs return tradable criteria.



Fig. 6. Form for assigning criteria weights for the six-question case.

the other responses, i.e. “very high” being the most favorable response and “very low” being the least favorable.

Once the criteria weights are determined, the VBA form for entering them is accessed from the Main Menu, as seen in Fig. 6. The weights within each tradable cluster of criteria will be normalized independently by PScore. For the example R&D program, all weights in Fig. 6 are set equal to a value of 1; that is, all weights within a criteria cluster have equal importance. When there are two members of a cluster, this results in each having a normalized weight of 0.50 in the scoring computations; when there are three members, each will have a normalized weight of 0.33. These weights can be changed later to see how the assigned values affect the project rankings. The last set of weights in Fig. 6 is used during the evaluation process, described in the next section, to make trade-offs between the merit score of a project and its funding request.

2.4. R&D project data entry

The final data entry step is to fill in the proposal data and the evaluators’ questionnaire responses on the Excel worksheet for each of the candidate projects. The


Fig. 7. Excel worksheet for entering proposal data and evaluator questionnaire responses.

correct number of worksheets (i.e. 12 for the example R&D program) was created from a template when the new program information was entered earlier. The proposal worksheet for Project Number 1 is shown in Fig. 7 as an example. The user enters the project title, the funding request, and the four sets of evaluator responses using a 5-point Likert scale where 1 corresponds to “very high” and 5 corresponds to “very low.” The “Calculate” button on this worksheet then computes the merit score, referred to as the “figure of merit.”

3. Discussion of Scoring and Ranking Results

PScore presents the program results both as a summary table and as charts. The summary table contains menu items that can be used to alter the program inputs and project data. Charts are used to graphically present the results and visually illustrate the merit and cost relationships between the projects.

3.1. Calculation of merit score and value index

The merit score (i.e. figure of merit) is strictly a relative measure, on a scale of 0 to 1, of how well a proposed project meets the four criteria of relevance, risk, reasonableness, and return. To obtain the merit score, the averages of the evaluators’ question responses are used together with the weights determined by the pairwise comparison process. The merit score equation for project i is

S_i = \frac{1}{4}\left\{ q_{1i}^{w_{11}}\, q_{3i}^{w_{12}} \left[ w_{21} q_{2i} + w_{22}\left( w_{31} q_{4i} + w_{32}\, q_{5i}^{w_{41}} q_{6i}^{w_{42}} \right) \right]^{w_{13}} - 1 \right\}.  (1)
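Equation (1) can be sketched in executable form as follows. This is an illustrative reimplementation, not PScore's VBA; the function name and argument layout are ours, and it assumes responses are coded with 5 as the most favorable value and that all weight tuples are already normalized to sum to 1.

```python
def merit_score(q, w1=(1/3, 1/3, 1/3), w2=(0.5, 0.5), w3=(0.5, 0.5), w4=(0.5, 0.5)):
    """Figure of merit for one project, per Eq. (1).

    q  : averaged evaluator responses [q1..q6] on the 1-5 scale
    w1 : (w11, w12, w13) on relevance, reasonableness, and the risk/return cluster
    w2 : (w21, w22) on risk vs. return
    w3 : (w31, w32) within the return cluster
    w4 : (w41, w42); w41 = 0 drops the programmatic question (five-question case)
    """
    q1, q2, q3, q4, q5, q6 = q
    ret = w3[0] * q4 + w3[1] * (q5 ** w4[0]) * (q6 ** w4[1])
    cluster = w2[0] * q2 + w2[1] * ret
    return ((q1 ** w1[0]) * (q3 ** w1[1]) * (cluster ** w1[2]) - 1) / 4

# Uniformly most favorable responses give a score of 1; uniformly least
# favorable responses give 0, matching the stated 0-1 range.
print(merit_score([5] * 6))
print(merit_score([1] * 6))
```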


In Eq. (1) for project i, q_{ni} is the average response value of the peer review team members for the nth question, and w_{jk} is the normalized weight on the kth criterion in the jth cluster. When there are only five questions, question 5’s weight w_{41} is assigned a value of 0, so that it does not contribute to the score. Since question responses map to a 1–5 Likert scale, the scores resulting from this equation will be between 0 and 1.

All program data and project results are presented in a consolidated Excel worksheet named Summary, which is shown in Fig. 8 for the example R&D program. Note that modifications can be made directly from this worksheet, such as adding a new proposal or a new evaluator and changing the criteria weights.

In addition to the merit score, two other measures are computed for each project. One is the scaled funds request, whose purpose is twofold: first, to state the amount of the funds requested in the same order of magnitude as the merit score, and second, to create a relative scale on which the amount of resources requested can be compared. The equation for this measure corresponding to project i is

f_i = 1 + \frac{\mathrm{req}_i - \min}{(\max - \min)/4}.  (2)

In this equation, reqi is the dollar amount requested by project i, min is the minimum request from all projects, and max is the maximum request from all projects. This equation simply performs a linear mapping of the amount requested to a scale of 1–5 so that the decision-maker can take into consideration the relative amount of resources requested to accomplish the stated technical objectives.
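Equation (2) translates directly into code. This sketch is ours; the guard for the degenerate case in which every project requests the same amount is an assumption, since the paper does not address it.

```python
def scaled_funds(requests):
    """Map each project's dollar request linearly onto a 1-5 scale, per Eq. (2)."""
    lo, hi = min(requests), max(requests)
    if hi == lo:  # all projects ask for the same amount (assumed handling)
        return [1.0 for _ in requests]
    return [1 + (r - lo) / ((hi - lo) / 4) for r in requests]

# The smallest request maps to 1, the largest to 5:
print(scaled_funds([100_000, 250_000, 500_000]))  # [1.0, 2.5, 5.0]
```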

Fig. 8. Excel Summary worksheet containing program data and individual proposed project results.


While the merit of a proposed project is important, organizations are also interested in the value of a project, and this is provided on the Summary worksheet for each project. Incorporating the concept of value in selecting R&D projects implies that due consideration has been given to the amount of funds required to achieve the project’s stated technical objectives. Since value is a function of both merit and cost, there should be some way to account for projects with equal technical merit when one costs significantly more than the other. For this reason, the merit score and the scaled funds request are combined into a single expression called the value index, V_i, computed as

V_i = a S_i + \frac{b}{4.8}\left(\frac{4 S_i + 1}{f_i} - 0.2\right).  (3)

In this equation, a and b are normalized weights that reflect the relative importance of the merit versus the cost term, S_i is the merit score for project i, and f_i is the scaled funds request for project i. For the example R&D program, these weights were set to a value of 1, as shown in Fig. 6. The mathematical manipulations shift and scale the numerical result so that the value index, like the merit score, ranges from 0 to 1.

The results of the analysis for the R&D program can be sorted to show the rankings of the project proposals based on any of these measures, in ascending or descending order. For example, the projects can be sorted in descending order by value index, as is the case in Fig. 8. Summary results can also be viewed graphically with pre-designed Excel charts that are accessible from the Summary worksheet.

3.2. Graphical presentation of results

PScore can automatically generate several different charts as a visual management decision-making and discussion aid. There is a scatter chart that displays the scaled funds request as a function of the merit score (figure of merit), which is shown in Fig. 9 for the example R&D program.
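Before turning to the charts, Eq. (3) can also be sketched in code (an illustrative reimplementation with our own names; normalizing a and b to sum to 1 mirrors the per-cluster normalization described in Sec. 2.3).

```python
def value_index(s, f, a=1.0, b=1.0):
    """Value index per Eq. (3): merit traded off against scaled cost.

    s : merit score in [0, 1]; f : scaled funds request in [1, 5];
    a, b : raw merit/cost weights, normalized here to sum to 1.
    """
    a, b = a / (a + b), b / (a + b)
    return a * s + (b / 4.8) * ((4 * s + 1) / f - 0.2)

# Two projects of equal merit: the cheaper one earns the higher value index.
print(value_index(0.8, 1.0))  # low scaled cost
print(value_index(0.8, 5.0))  # high scaled cost
```

Note that the cost term is constructed so that the whole index, like the merit score, stays in [0, 1]: it is 1 for a perfect-merit, minimum-cost project and 0 for a zero-merit, maximum-cost one.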
Figure 9 readily indicates which projects are of high technical quality and very cost-effective, which projects are not, and which projects are not as straightforward to classify. Quadrant labels have been added to Fig. 9 to indicate these categories for the purposes of this discussion. Proposed projects in quadrant IV offer the biggest “bang for the buck” because they are of high merit and relatively low cost. Projects in quadrant II are, conversely, the least desirable for the organization. Projects in quadrants I and III will require further consideration to determine their status.

A chart that plots both the value indices and the figures of merit by project proposal number is also available, as shown in Fig. 10. It depicts graphically how the merit score is affected by the magnitude of the cost of the work. This chart shows that for large values of the scaled funds request, the value index will be significantly reduced relative to the merit score. A third chart (not shown) displays the scaled funds request and merit score by proposal number. It is identical in form


Fig. 9. Chart showing scaled funds request vs figure of merit for example R&D program.

Fig. 10. Chart showing value index and figure of merit by proposal number for the example R&D program.


to the chart in Fig. 10 except that the scaled funds request instead of the value index is plotted on the vertical axis. This chart allows decision-makers to see at a glance which proposals have the greatest technical merit and which proposals are the most expensive.
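The quadrant reading of Fig. 9 can be reproduced programmatically. The midpoint thresholds below are our assumption for illustration only; the paper adds quadrant labels for discussion and does not specify numeric boundaries.

```python
def quadrant(merit, scaled_cost, merit_cut=0.5, cost_cut=3.0):
    """Classify a project by merit score (0-1) and scaled funds request (1-5).

    Quadrant IV: high merit, low cost  -> biggest "bang for the buck"
    Quadrant II: low merit, high cost  -> least desirable
    Quadrants I and III need further managerial consideration.
    Thresholds are illustrative midpoints, not values from the paper.
    """
    if merit >= merit_cut:
        return "I" if scaled_cost >= cost_cut else "IV"
    return "II" if scaled_cost >= cost_cut else "III"

print(quadrant(0.9, 1.2))  # high merit, cheap     -> IV
print(quadrant(0.2, 4.5))  # low merit, expensive  -> II
```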

4. Management Decision-Making Considerations

The R&D project selection process has two main purposes: first, to obtain data on the relative merit of the candidate projects, and second, to facilitate discussion about what the organization truly values in its R&D program. The results of the analysis are intended primarily to inform and guide decision-makers, not to make the decision for them. Indeed, it is the process of making the decision and discussing the rationale for that decision that should be the most instructive and beneficial for the organization.

4.1. Interpretation of results

Of the 12 projects in the example R&D program, project 5 is the clear winner based on both the merit score and value index. This occurs because it has the highest merit score possible and the lowest funding request of all the projects. Projects such as project 5 that are clear winners appear in the extreme region of quadrant IV in Fig. 9; these projects should require no further deliberation to be selected. Conversely, project 6 has a low merit score and a high relative cost, which places it in the extreme region of quadrant II in Fig. 9. Projects in this category should require no further deliberation to be discarded.

However, sometimes a project will have a high merit score and a relatively high funding request, as is the case for project 9. This places project 9 in the extreme region of quadrant I in Fig. 9. A high relative cost will cause a project to have a value index ranking that is lower than its merit rank. Projects in the extreme region of quadrant I oftentimes represent efforts in the early cost-intensive phases of R&D that could result in significant scientific breakthroughs. It is important for an organization to have some of these kinds of projects in its R&D portfolio; therefore, decisions regarding projects in the extreme region of quadrant I should be carefully evaluated by decision-makers. On the other hand, projects that appear in the extreme region of quadrant III, even though they are relatively inexpensive, have low merit scores. Low cost is never adequate compensation for lack of merit, and projects in the extreme region of quadrant III should require no further consideration to be discarded.

Decisions regarding non-extreme cases of merit and cost are not as unambiguous. In these cases, decision-makers can rely on information provided by both the merit score and value index rankings. Table 1 shows the rankings for the 12 projects in the example program based on merit and value.
Six of the twelve projects change position when their value rank is compared to their merit rank. This kind of

Table 1. Comparison of merit and value rankings for projects in the example program.

merit-to-value comparison is useful to assess how relative cost impacts the concept of total value. Another useful exercise is to compare the merit score to the corresponding value index across all projects. The chart in Fig. 10 displays this information by proposal number. For the majority of projects in the example R&D program, the value index falls below the merit score. The magnitude of the separation between the two is related to the magnitude of the funds request; however, this effect is not linear, because the impact of the funds request on the value index is more pronounced when the merit score and the funds request are both relatively high. As a case in point, projects 2 and 4 have the same funds request, but project 2 has a merit score of 0.905 and project 4 has a merit score of 0.301. However, the separation in Fig. 10 between the merit score and value index for project 2 is more than three times greater than it is for project 4. A similar situation exists for projects 1 and 12 and for projects 3 and 10.

4.2. Sensitivity analysis

As part of managerial decision-making, it is important to address the sensitivity of the rankings to the decision objective and to the criteria weights. In particular, it is desirable that the level of sensitivity strikes a balance between oversensitivity to minor differences in the criteria weights and total insensitivity to major differences in them. Table 2 contains the results of an analysis of how the projects in the example R&D program change rank when incremental changes are made to the relative values of the criteria weights. Recall that all of the weights were equal to 1 in determining the initial rankings. From these data, we can see that the merit score


Table 2. Number of proposed projects that change rank with a change in criteria weights.

Basic research weight scheme:

  Prob. of Success   Return   Scientific Return   Business Return   Change merit rank   Change value rank
  1                  2        2                   1                 0                   2
  1                  3        3                   1                 2                   2
  1                  4        4                   1                 2                   2
  1                  5        5                   1                 4                   2

Applied research weight scheme:

  Prob. of Success   Return   Scientific Return   Business Return   Change merit rank   Change value rank
  2                  1        1                   2                 0                   0
  3                  1        1                   3                 0                   0
  4                  1        1                   4                 0                   0
  5                  1        1                   5                 0                   0

Value index weight scheme (merit term vs cost term):

  Merit term (a)   Cost term (b)   Number that change value rank
  2                1               4
  3                1               4
  4                1               4
  5                1               4
  1                2               3
  1                3               3
  1                4               5
  1                5               5
and value index rankings for these proposed projects are much more sensitive to criteria weights that are skewed toward a basic research scheme than to criteria weights that are skewed toward an applied research scheme. Table 2 also indicates that the relative project rankings are fairly sensitive to changes in the relative weighting of the merit term versus the cost term of the value index equation. These particular sensitivities could differ if the evaluator responses were different; that is, a different set of evaluator responses could exhibit different sensitivities to alterations in the weights. Since the intention of the criteria weights is to effect the desired character of the R&D portfolio (i.e. cause it to become basic research or applied research in nature), a robust but reasonable sensitivity to the weight scheme is essential.

4.3. Limitations of the project selection process

Scoring does have some disadvantages relative to other, more rigorous selection techniques. Scoring does not measure utility; therefore, it does not assign any measure of decision-maker value to the projects. Scoring provides only the capability to rank projects on an ordinal scale; a score for one proposal that is twice as big as the


score for a second proposal does not mean that the first proposal is twice as good as the second. Equations for scoring computations must be constructed so that they correctly reflect the interaction of tradable criteria; if they are not, the results of the scoring exercise may not accurately represent the organization’s intentions. Scoring is not, in general, an R&D project selection method that handles quantitative measures of financial return. This would be a disadvantage in the late stages of the development phase, where financial return may be the single most important decision criterion.

5. Summary

This paper presented PScore, an Excel-based R&D project selection decision support tool that evaluates and ranks R&D projects by scoring. It can be used to aid in making decisions regarding initial and/or continued resource allocation. PScore is based on the method developed by Henriksen and Traynor, extended and enhanced with numerous features that make it practical to utilize in a real-world setting.1 The underlying equations are programmed into the application so that a user does not need to know the mathematical details of the method to use PScore effectively.

PScore ranks R&D projects using four criteria: relevance of the proposed project to the institution’s mission and research objectives; level of risk to be incurred by the project; reasonableness of the stated outcomes based on available resources; and several kinds of return. A questionnaire is used by peer-review teams to evaluate all proposed projects against these criteria. The criteria are weighted relative to one another to emphasize the institution’s perceived importance of some criteria over others, and this relative weight information is then included in the scoring computations.
In addition to evaluating the intrinsic merit of a proposed project based on the four criteria, the value indices of the projects explicitly take into account the magnitude of the resources requested to complete the project. Since any objective can conceivably be accomplished with infinite resources, this approach encourages realistic, reasonably achievable, and cost-effective projects.

Several additional extensions to PScore are under consideration, including the following:

• Questionnaires customized to other types of projects, with an appropriately revised weighting structure and scoring algorithm.
• Incorporation of the dispersion of the questionnaire responses into the analysis.
• Explicit consideration of quantitative financial return.

Readers who are interested in obtaining a demonstration version of PScore should contact the first author.

Acknowledgments

We would like to acknowledge the assistance of Ms Jaclyn Tripken in the initial phases of PScore development. The authors thank the J. W. and Alice S. Marriott


Foundation for a professional development grant in the initial stages of this work. The authors would also like to thank two anonymous reviewers for their excellent suggestions.

Appendix A. Peer-Review Questionnaire

This appendix provides the six-question form to be completed by peer evaluators for initial project funding requests (Fig. A.1). Question 5, on programmatic return, is omitted from the five-question version of the form.

PROJECT SELECTION QUESTIONNAIRE FOR INITIAL FUNDING
(For each question, please circle one answer)

Project Number/Title:

1. RELEVANCE: What is the degree to which you believe this proposed project supports this organization's mission and objectives?
   A. Very high   B. High   C. Average   D. Low   E. Very low

2. RISK: What do you believe the probability is that this proposal/project can successfully achieve its stated scientific/technical objectives?
   A. Very high   B. High   C. Average   D. Low   E. Very low

3. REASONABLENESS: What do you believe the probability is that this proposal/project could successfully achieve its stated scientific/technical objectives on time and on budget with the requested level of resources?
   A. Very high   B. High   C. Average   D. Low   E. Very low

4. BASIC RESEARCH RETURN: Rate the potential scientific/technical impact of this proposed project, assuming it is successful.
   A. Very high   B. High   C. Average   D. Low   E. Very low

5. PROGRAMMATIC RESEARCH RETURN: Rate the potential programmatic impact of this proposed project, assuming it is successful.
   A. Very high   B. High   C. Average   D. Low   E. Very low

6. BUSINESS RETURN: Rate the potential business return of this proposed project, assuming it is successful.
   A. Very high   B. High   C. Average   D. Low   E. Very low

Fig. A.1. Project selection questionnaire for initial funding.
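As a concrete illustration of how a review team's circled answers could be digitized, the sketch below maps the A-E letters to numbers and averages each question over all reviewers. The A=5 through E=1 mapping and the simple averaging are assumptions for illustration; the questionnaire itself does not prescribe them:

```python
# Illustrative sketch: converting circled letter answers (A-E) into numeric
# per-question ratings for a review team. The A=5..E=1 scale and the plain
# averaging are assumed conventions, not specified by the form.

GRADE = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}
QUESTIONS = ("relevance", "risk", "reasonableness",
             "basic_return", "programmatic_return", "business_return")

def team_ratings(responses):
    """Average each question's letter grades over all reviewers.

    responses: list of dicts mapping question name -> circled letter.
    """
    return {q: sum(GRADE[r[q]] for r in responses) / len(responses)
            for q in QUESTIONS}

reviewers = [
    {"relevance": "A", "risk": "B", "reasonableness": "B",
     "basic_return": "A", "programmatic_return": "C", "business_return": "D"},
    {"relevance": "B", "risk": "B", "reasonableness": "C",
     "basic_return": "A", "programmatic_return": "B", "business_return": "C"},
]
averaged = team_ratings(reviewers)
# e.g. averaged["relevance"] is (5 + 4) / 2 = 4.5
```

Averaged ratings like these are the kind of per-criterion inputs a scoring computation would consume; the paper's planned extension on response dispersion suggests the spread of the letters, not just their mean, may also be of interest.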


References

1. A. D. Henriksen and A. J. Traynor, A practical R&D project-selection scoring tool, IEEE Transactions on Engineering Management 46(2) (1999) 158–170.
2. H. A. Averch, Criteria for evaluating research projects and portfolios, in Assessing R&D Impacts: Methods and Practice, eds. B. Bozeman and J. Melkers (Kluwer Academic Publishers, Norwell, 1993), pp. 264–277.
3. A. Salo and J. Liesiö, A case study in participatory priority setting for a Scandinavian research program, International Journal of Information Technology and Decision Making 5(1) (2006) 65–88.
4. E. B. Lieb, How many R&D projects to develop? IEEE Transactions on Engineering Management 45(1) (1998) 73–77.
5. J. Davis, A. Fusfeld, E. Scriven and G. Tritle, Determining a project's probability of success, Research-Technology Management 44(3) (2001) 51–57.
6. R. F. Bordley, R&D project selection vs. R&D project generation, IEEE Transactions on Engineering Management 45(4) (1998) 407–413.
7. C. Cabral-Cardoso and R. L. Payne, Instrumental and supportive use of formal selection methods in R&D project selection, IEEE Transactions on Engineering Management 43(4) (1996) 402–410.
8. M. J. Liberatore and G. J. Titus, The practice of management science in R&D project management, Management Science 29(8) (1983) 962–974.
9. M. M. Menke, Improving R&D decisions and execution, Research-Technology Management 37(5) (1994) 25–32.
10. O. C. T. Onesime, X. Xiaofei and Z. Dechen, A decision support system for supplier selection process, International Journal of Information Technology and Decision Making 3(3) (2004) 453–470.
11. W. E. Souder, A scoring methodology for assessing the suitability of management science models, Management Science 18(10) (1972) B526–B543.
12. N. S. Thomaidis, N. Nikitakos and G. D. Dounias, The evaluation of information technology projects: A fuzzy multicriteria decision-making approach, International Journal of Information Technology and Decision Making 5(1) (2006) 89–122.
13. L. W. Steele, Selecting R&D programs and objectives, Research-Technology Management 31(2) (1988) 17–36.
14. S. Zahir, Eliciting ratio preferences for the Analytic Hierarchy Process with visual interfaces: A new mode of preference measurement, International Journal of Information Technology and Decision Making 5(2) (2006) 245–261.
