International Journal of Project Management 31 (2013) 532 – 541 www.elsevier.com/locate/ijproman
A decision-making support system module for project manager selection according to past performance

Yossi Hadad, Baruch Keren, Zohar Laslo ⁎

Industrial Engineering and Management Department, SCE — Shamoon College of Engineering, Beer Sheva, Israel

Received 14 March 2012; received in revised form 21 August 2012; accepted 4 October 2012
Abstract

This paper proposes a decision-making support system (DMSS) module for selecting project managers and demonstrates its implementation. The selection of a new project manager is based mainly on the past performance of the potential managers, for example, on the relative performance evaluations they have received on projects managed by them in the past. Past projects are ranked in accordance with a ranking method, and project managers are ordered according to the ranks of their past projects. The difference in quality between the past performance of the candidates is statistically examined using the Mann–Whitney–Wilcoxon U test. This enables the establishment of a subgroup of one or more preferred candidates, where the significance level of the statistical test has an impact on the subgroup size. The final candidate may be selected from this subgroup according to personal qualifications and suitability for the specific project. We demonstrate the use of such a DMSS module by an Israeli information technology company as part of its process to select a project manager. A ranking method within the Data Envelopment Analysis context (the Cross-Efficiency method) was implemented, with three inputs and four outputs selected for the project ranking. © 2012 Elsevier Ltd. APM and IPMA. All rights reserved.

Keywords: Project management (PM); Ranking methods (RM); Mann–Whitney–Wilcoxon U test (MWW)
⁎ Corresponding author. Tel.: +972 8 6475640; fax: +972 8 6475643. E-mail addresses: [email protected] (Y. Hadad), [email protected] (B. Keren), [email protected] (Z. Laslo).
0263-7863/$36.00 © 2012 Elsevier Ltd. APM and IPMA. All rights reserved. http://dx.doi.org/10.1016/j.ijproman.2012.10.004

1. Introduction

Successful project outcomes depend on several critical success factors, including the involvement of a suitable and qualified project team and a competent project manager with good leadership skills (Fortune and White, 2006). In selecting the project team, the ability of individuals to meet the project's legal, technical, and experience requirements should be taken into consideration, as well as their ability to develop social ties and facilitate group interactions (Ballsteros-Perez et al., 2012). Yang et al. (2011) found that project type moderates the relationship between certain teamwork aspects and overall project success, while an increase in the level of leadership may enhance the relationships between team members. Finally, they found that teamwork has a statistically significant effect on project performance. For this reason, the selection of the project manager has a significant impact on the successful accomplishment of the project.

The project manager selection process involves many criteria. Thus, it is crucial to characterize appropriate and measurable criteria so as to streamline the process. The project manager selection process has been widely discussed in the literature. Zavadskas et al. (2008) surveyed 23 papers and proposed a set of criteria for the selection of construction managers. They affirmed that the criteria most often taken into consideration are the candidate's personal skills, project management skills, and experience in similar projects. El-Sabaa (2001) conducted a survey concerning the skills of the most successful project managers and clustered them into three main categories: human skills, conceptual and organizational skills, and technical skills. Further, Hauschildt et al. (2000) identified five types of project managers, calculated the prevalence of each type, and examined the success levels of the project managers classified into each type. The authors suggest that this typology has the potential to enable more effective
project manager selection. In addition, it allows organizations to maintain a team of project managers optimal for their various needs, as the analysis of a candidate's experience and past performance is often performed prior to formal job offers. Muller and Turner (2007) claimed that the past performance of project managers depends on their competence, particularly their leadership style, which comprises emotional intelligence, management focus, and intellect. When a number of candidates are under consideration for a particular project, the candidates' personal past performance records may be used as general informative measures for project manager selection, because they reflect most of the criteria that should be considered. In a relevant study, Cheng et al. (2005) concluded that job-task competencies are specific to a given project. However, when selecting project managers it is often ideal to consider a candidate's past performance as well as his or her suitability for the specific project.

Employers welcome tools for analyzing the past project performances of prospective candidates. However, given the complex nature of such projects, obtaining a detailed evaluation of a project's success or failure is a difficult task (Ogunlana et al., 2002). In fact, project performance evaluation is a challenging multiple-criteria problem. Once the criteria are determined, they must be weighted to reflect the organization's preferences (Eilat et al., 2006). Often, project performance is measured by the degree to which time, cost, and technical specifications are fulfilled.

In this paper, a decision-making support system (DMSS) module for selecting project managers is proposed. Initially, this DMSS module calculates and ranks the performance scores achieved in previously accomplished projects.
Next, the ranked projects are clustered into categories according to each project manager, for whom an average score is then calculated based on the projects he or she performed. To test the significance of the difference between any two grades, the Mann–Whitney–Wilcoxon U test (MWW) is carried out (a pairwise comparison). The decision makers set the appropriate significance level, after which a subgroup of the most suitable project managers is established. If the subgroup includes just one candidate, the selection is clear. If the subgroup includes several candidates, the project manager is selected from the subgroup according to personal skills, project management and technical skills, experience in similar projects, and suitability for the specific project.

The paper is structured as follows: Section 2 describes the project ranking, Section 3 describes the ranking of the project managers, Section 4 describes a real-life implementation of the proposed DMSS module, and Section 5 summarizes the paper.

2. Project ranking

2.1. The sampled set of past projects

Project ranking is performed on a set of recorded projects accomplished by project managers with a history of managing organizational projects. Obviously, a heterogeneous set of projects, in terms of characteristics such as project type, location, and technical or financial complexity, may bias the results. However,
the more similar the compared projects are, the more significant the DMSS results that can be obtained. Additional attention should be paid to the argument that there may be special reasons for a project's success or failure that are beyond the performance of the project manager. Project management cannot be isolated from its environment, and it is strongly affected by external disturbances. An essential role of the project manager is to cope with such disturbances by controlling and continuously reestablishing the project targets. Such disturbances may introduce additional, unavoidable, but mostly insignificant bias into the results, which can be reduced by adopting criteria such as stabilization coefficients as performance inputs (see, for example, the determination of inputs in Section 4.2).

It is important to avoid (as much as possible) including in the same set past projects that were performed under the following conditions:

1) During a change of organizational and market conditions, when reaching optimum performance required increasing or reducing the relative influence of the project managers within the organization (Laslo and Goldberg, 2001).

2) During a period of organizational instability, when the organizational participants were struggling over the determination of resource allocation policies (Laslo and Goldberg, 2008).

However, if the set of past projects was performed within a more stable framework, the DMSS results may be less biased. In order to guarantee the accuracy of the results obtained by the DMSS module, outlier projects must be eliminated from the data sample. Often, a data sample contains a few extreme projects whose relationships among their metrics deviate substantially from those of the remaining mainstream bulk of projects (Chan and Wong, 2007). The direct consequence is degraded evaluation accuracy of the constructed model.
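One simple way to screen a sample for such outlier projects is an interquartile-range (Tukey-fence) filter on a project metric. This is a generic sketch, not the specific methodology of Chan and Wong (2007); the ratio values below are hypothetical:

```python
import numpy as np

def iqr_outliers(values, k=1.5):
    """Return indices of values outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's rule)."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [i for i, v in enumerate(values) if v < lo or v > hi]

# Example: cost-overrun ratios (actual cost / allocated budget) of sampled projects.
ratios = [1.27, 1.76, 1.35, 1.52, 1.21, 1.13, 4.90, 1.18, 1.11]
print(iqr_outliers(ratios))  # → [6] (the extreme project at index 6)
```

Projects flagged this way should still be reviewed manually before removal, since an extreme value may reflect a genuine managerial failure rather than noise.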
To overcome this problem, Chan and Wong (2007) proposed a methodology to identify and eliminate such outlier projects prior to model construction. The practice of eliminating outlier projects from samples is commonly used; see, for example, Chan and Wong (2007), Barros et al. (2004), and Seo et al. (2008). In the case of an exceptional project failure, the decision maker should consider whether the fault lies with the candidate and whether it reflects inadequate professionalism as a project manager. If so, the conclusion should usually be to eliminate this project manager from the initial list of candidates and to remove that manager's past projects from the set of projects to be ranked.

2.2. Ranking methods

The literature introduces a variety of ranking methods for project performance; see, among others, Zopounidis and Doumpos (2002), Adler et al. (2002), and Hadad and Hanani (2011). For our
purpose, appropriate ranking methods should fulfill the following requirements:

1) Support Multi-Criteria Decision Analysis (MCDA).

2) Use objective common weights for each determined criterion (for all the projects).

3) Be supported by available software that enables the practitioner to obtain criteria values, weights, scores, and rankings.

Not all ranking methods meet these requirements (for example, the Analytic Hierarchy Process (AHP; Saaty, 1980) and ELECTRE (Roy, 1990) do not). However, the Cross-Efficiency (CE) ranking method (Sexton et al., 1986), in the Data Envelopment Analysis (DEA) context, is among those methods that fulfill them. DEA (the CCR model) was first developed by Charnes et al. (1978). DEA is a non-parametric mathematical programming approach used to evaluate the relative efficiency of Decision-Making Units (DMUs) on the basis of multiple inputs and outputs. The relative efficiency is defined as the ratio between a weighted output and a weighted input. The weights are chosen so as to maximize the relative efficiency of each DMU, under the restriction that no efficiency score exceeds 100%. The weights are obtained by solving a linear programming problem. A DMU that receives an efficiency score of 100% under its optimal weights is considered efficient; a DMU with a lower score is considered inefficient. DEA does not use common weights (as multiple-criteria decision models do); the optimal weights for inputs and outputs are defined individually by each DMU. All ranking methods in the DEA context require dividing the criteria into inputs and outputs. DEA is considered a useful tool and is broadly used in the public and private sectors (an extensive review of DEA applications, with over 3200 references, is provided by Seiford (1996)). Recently, numerous articles have been published on a variety of DEA implementations, among them El-Mashaleh et al. (2010), Cao and Hoffman (2011), Natarajan et al. (2011), and Ghapanchi et al. (2012).
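The CCR efficiency score can be sketched as one small linear program per DMU, using the standard linearization of the ratio model: maximize the weighted output of DMU k, subject to its weighted input equaling 1 and no DMU exceeding 100% efficiency. The DMU data below are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """CCR efficiency of DMU k. X: (n, m) input matrix, Y: (n, s) output matrix.
    Linearized ratio model: max u·y_k  s.t.  v·x_k = 1,  u·y_j - v·x_j <= 0."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [v (m input weights), u (s output weights)], all >= 0.
    c = np.concatenate([np.zeros(m), -Y[k]])          # linprog minimizes, so negate
    A_ub = np.hstack([-X, Y])                         # u·y_j - v·x_j <= 0 for all j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([X[k], np.zeros(s)])[None]  # v·x_k = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m + s))
    return -res.fun

# Toy data (hypothetical): 3 DMUs, one input, one output.
X = np.array([[1.0], [2.0], [4.0]])
Y = np.array([[1.0], [4.0], [6.0]])
print([round(ccr_efficiency(X, Y, k), 3) for k in range(3)])  # → [0.5, 1.0, 0.75]
```

With a single input and output, the score reduces to each DMU's output/input ratio divided by the best ratio, which makes the toy result easy to verify by hand.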
Several papers describe the use of DEA for comparing project efficiency in a multi-project environment. For example, Vitner et al. (2006) defined objective criteria for project comparisons, such as cost, work content, Schedule Performance Index (SPI), and Cost Performance Index (CPI). In addition, they defined subjective criteria, such as the level of project monitoring and the level of uncertainty in meeting project targets. Eilat et al. (2006) evaluated R&D projects using an integrated DEA and balanced scorecard approach, based on objective and subjective criteria. Mahmood et al. (1996) used DEA to measure the productivity of software projects. Their input criteria were the time spent on analysis, design, coding, integration, and quality control, and their output criteria were the number of lines of code and the number of routines. The authors selected these variables because they are quantifiable, and so may be consistently measured and collected.

The CE ranking method implements the CCR model by solving n linear programming problems to obtain the optimal input and output weights for each DMU. It then calculates the efficiency of each DMU according to the optimal weights of all the DMUs (these calculations produce an n × n cross-evaluation matrix). The method scores each DMU by calculating the average of its efficiencies, and ranks the DMUs according to their scores, where a higher score signifies a higher rank.

2.3. Criteria that can be defined as inputs and outputs

Many criteria can be used to compare the relative performance success of completed projects. Asosheh et al. (2010) surveyed six papers concerning IT project management and discussed the roughly 30 criteria reviewed therein. Evaluation criteria can be classified into the following sets:

1) Criteria regarding the project's allocated budget and actual costs, such as hardware and software costs, CPI, earned value, and cash flow.

2) Criteria regarding the project's resources and their consumption, such as labor hours and other human resources.

3) Criteria regarding the project's time span, such as completion time, training time, and SPI.

4) Criteria regarding the project's risks, such as risk score, complexity, and potential risk.

Thus, the inputs and outputs used to establish the DMSS module for comparing the relative performance success of completed projects can be drawn from a large assortment of criteria. However, each input and output must be measurable and related only to project performance. Personal attributes of the candidates (leadership, age, and so on) may be essential criteria in the selection of a project manager, but such criteria should be considered in the stage of determining the initial group of candidates, before the implementation of the DMSS module, or afterward, in the stage of selecting the project manager from the preferred subgroup according to personal qualifications and suitability for the specific project. A preserved database of past projects is not always faultless.
If no technique is available for restoring at least approximations of the missing data, the selection of the desired inputs and outputs is restricted by the availability of the criteria values.

3. Candidate ranking

In order to rank potential project managers according to their performance, a seven-step procedure is proposed. The individual project scores are calculated in steps 1–4 (first stage), and the candidate ranks are calculated in steps 5–7 (second stage).

Step 1: Determine the projects of each candidate.

Step 2: Choose the ranking method and determine the performance criteria, divided into inputs and outputs.

Step 3: Calculate or determine the input and output values.

Step 4: Calculate the score of each project according to the selected ranking method. Rank the projects according to their scores, in descending order.
Step 5: Calculate the grade of each candidate as the average rank of his or her projects (as calculated in step 4). Rank the candidates according to their grades, in ascending order (the candidate with the lowest grade will be at the top, followed by the candidate with the second lowest grade, etc.).

Step 6: Use the MWW U test (a one-tailed test) to test the difference between the grades of any two candidates and compute the lower half of the triangular pairwise comparison matrix. This matrix is sorted with the grades in ascending order.

Step 7: Select the subgroup of the preferred managers.

7.1 Set the significance level α, above which the decision-makers will not reject the hypothesis that two candidates have the same performance. Note that a smaller value of α may increase the size of the subgroup of preferred candidates.

7.2 For each row of the triangular pairwise matrix (computed in step 6), check whether one (or more) of the row values is less than α (i.e., whether there is at least one candidate with significantly better past performance). If not (all the row values are equal to α or larger), the candidate under consideration is included in the subgroup of the preferred candidates. This procedure can easily be performed in Microsoft Excel with the "IF" function.

The procedure outlined above thus establishes the subgroup of preferred candidates, from which new project managers will be selected. This selection process guarantees that there are no other candidates with significantly better performance records than the chosen candidate.
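Steps 5–7 can be sketched as follows; the candidate names and project ranks are hypothetical, and scipy's `mannwhitneyu` is used for the one-tailed MWW test:

```python
from scipy.stats import mannwhitneyu

def preferred_subgroup(ranks_by_candidate, alpha=0.05):
    """ranks_by_candidate: {name: [project ranks]} (a lower rank = a better project).
    Returns the candidates not significantly outperformed by any other candidate,
    using the one-tailed Mann-Whitney-Wilcoxon U test (steps 5-7)."""
    # Step 5: grade = average project rank; sort ascending (best grade first).
    grade = {c: sum(r) / len(r) for c, r in ranks_by_candidate.items()}
    order = sorted(grade, key=grade.get)
    subgroup = []
    for i, c in enumerate(order):
        beaten = False
        for better in order[:i]:  # step 6: pairwise tests against better-graded candidates
            # H1: 'better' tends to have lower (= better) project ranks than c.
            p = mannwhitneyu(ranks_by_candidate[better], ranks_by_candidate[c],
                             alternative='less').pvalue
            if p < alpha:         # step 7.2: c is significantly outperformed
                beaten = True
                break
        if not beaten:
            subgroup.append(c)
    return subgroup

# Hypothetical example with three candidates:
ranks = {'A': [1, 3, 2, 5], 'B': [4, 6, 8, 7], 'C': [20, 22, 25, 21]}
print(preferred_subgroup(ranks, alpha=0.05))  # → ['A']
print(preferred_subgroup(ranks, alpha=0.01))  # → ['A', 'B', 'C']
```

Lowering α from 0.05 to 0.01 enlarges the subgroup, illustrating the remark in step 7.1.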
Note that the preferred candidate subgroup will always include at least one manager (the manager with the highest ranking). If the preferred candidate subgroup includes only one candidate, this candidate will be selected. If the subgroup includes several candidates, the final selection from this subgroup will be made according to the candidates' personal qualifications and their suitability for the specific project. Within the preferred candidate subgroup, the differences between the candidates are not statistically significant, so any candidate from this subgroup can be selected with confidence. The choice between them may take additional criteria into account, beyond past performance. Such additional criteria may include education level, age, personal skills, project management skills, seniority, and others, such as those discussed by Zavadskas et al. (2008). The relative weight of each criterion may be subjectively determined by the decision makers; alternatively, they can employ tools such as the AHP (Saaty, 1980).

4. An example of implementation

4.1. The candidates and their accomplished projects

The following case study describes the implementation of a project manager selection process using the proposed DMSS module, in the business framework of a large software company that develops and implements IT projects. Over the past eight years, fifty-two moderately sized projects were managed by eleven project managers, all of whom were candidates for the management of a new project.

4.2. The definition of inputs and outputs

Criteria related to risks in project management are widely discussed in the literature (see Asosheh et al. (2010), Vitner et al. (2006), and Eilat et al. (2006)). In these works, risk criteria are subjectively estimated, for example, by the Analytic Hierarchy Process (AHP) or by a Delphi inquiry. Khorramshahgol et al. (1988) proposed an improved methodology for goal programming in project evaluation and selection. In their view, a project is a risky investment, and decision-makers can limit risk by setting an upper bound on the project's estimated cost coefficient of variation σ(Ci)/E(Ci), where E(Ci) and σ(Ci) are the i-th project's expected cost and its cost standard deviation, respectively. The coefficient of variation is widely used as a quantitative measure of risk in financial investments (Levy and Sarnat (1995) present a good example).

Project risks should be evaluated in advance. Thus, the total project cost and completion time, and their possible deviations, should be estimated before the project is released for execution (Laslo and Gurevich, 2011b). In general, methodologies such as that described by Khorramshahgol et al. (1988) are based on estimations related to the project as a whole. Alternatively, Laslo and Gurevich (2011a) argued that Monte Carlo simulations based on detailed estimations of each project activity's cost and time distributions may require more effort, but provide more reliable estimations of project cost and time expectations, as well as their variances.

The risk criteria of completed projects are no longer of interest to the organization. Thus, the past cost and completion time standard deviation estimates provided for such criteria are mostly discarded. For such situations, we propose substitutes for the cost and completion time standard deviation estimates of activity j, σ(Cj) and σ(Tj), respectively. The planned activity cost and duration are represented by their expected values E(Cj) and E(Tj), respectively. Finally, the actual activity cost and the actual activity duration are represented by their most likely values, m(Cj) and m(Tj), respectively.

PERT (Project Evaluation and Review Technique) assumes that activity durations are beta-distributed (Kelley and Walker (1959), Kelley (1961)). Laslo (2003) showed that if, in cost-price terms, the activity duration is beta-distributed, then the activity cost is also beta-distributed. In the case of fixed-price terms, where the actual cost is equal to the budget, the activity cost can be considered beta-distributed but with zero standard deviation. Golenko-Ginzburg (1988) assumed the following for the beta distribution:

b(xj) − m(xj) = 2[m(xj) − a(xj)]    (1)

In this formulation, where a(xj) denotes the optimistic estimate of the distributed parameter xj, m(xj) is the most likely
estimate, and b(xj) the pessimistic estimate, the following holds true:

E(xj) = [3a(xj) + 2b(xj)] / 5,    σ(xj) = [b(xj) − a(xj)] / 5    (2)

A substitute for the standard deviation of a beta-distributed parameter, such as activity cost or activity duration, can thus be defined by the values that represent its planned and actual values:

σ(xj) = 3[E(xj) − m(xj)]    (3)

Thus, with rational estimates for the expected activity costs E(Cj), the expected activity durations E(Tj), and their standard deviations σ(Cj) and σ(Tj), a set of good estimates can be obtained by using a Monte Carlo simulation procedure. This set of estimates includes the expected project cost E(Ci), the expected project completion time E(Ti), and their standard deviations σ(Ci) and σ(Ti), respectively (Laslo and Gurevich, 2011b). Liu and Zheng (1989) defined the stabilization coefficient as the reciprocal of the coefficient of variation; the stabilization coefficient is well known in the scientific literature (see Conway (1989) and Lintner (1965) for examples). Cost and time stabilities are positive inputs. Thus, the following inputs have been chosen for the proposed DMSS:

Input 1: the cost stabilization coefficient of project i.

X1i = E(Ci) / σ(Ci)    (4)
Input 2: the completion time stabilization coefficient of project i.

X2i = E(Ti) / σ(Ti)    (5)
Note that the project manager has no influence on the stabilization coefficients (the reciprocals of the risks) of his or her project. Moreover, a more stable project is expected to yield higher outputs. Therefore, X1i and X2i must be considered inputs. The project's intensity (expenditure rate) is calculated as its estimated cost E(Ci) divided by its expected completion time E(Ti). Obviously, a lower intensity requires less extensive managerial effort, so the reciprocal of the intensity should be considered an input. Thus, the following input has been added:

Input 3: the reciprocal of the intensity of project i.

X3i = E(Ti) / E(Ci)    (6)
The actual project cost excluding the implementation expenses, e(Ci), and the actual project completion time excluding the implementation duration, e(Ti), are both affected by the project manager's performance. The ratio between the expected (planned) project cost E(Ci) and the actual project cost e(Ci), and the ratio between the expected (planned) project completion time E(Ti) and the actual project completion time e(Ti), both reflect the manager's contributions to the project's success. Thus, the following outputs were chosen:

Output 1: the ratio between the expected cost and the actual cost, excluding the implementation expenses, of project i.

Y1i = E(Ci) / e(Ci)    (7)
Output 2: the ratio between the expected completion time and the actual completion time, excluding the implementation duration, of project i.

Y2i = E(Ti) / e(Ti)    (8)
Additional criteria reflecting the quality of accomplished projects are associated with their immediate implementation by the customer. These criteria are significant, especially for information technology (IT) projects. Generally, the implementation of large projects is expected to require more effort than that of smaller projects. Higher values of these outputs correspond to lower implementation expenses and shorter implementation durations. Thus, the following additional outputs are included in the DMSS:

Output 3: the ratio between the actual cost, excluding implementation expenses, and the implementation expenses of project i.

Y3i = e(Ci) / l(Ci)    (9)
In this formulation, l(Ci) is the i-th project's implementation expenses.

Output 4: the ratio between the actual completion time, excluding the implementation duration, and the implementation duration of project i.

Y4i = e(Ti) / l(Ti)    (10)
In this formulation, l(Ti) is the i-th project's implementation duration. We assume that projects with larger outputs (such as those with higher Y1,i, Y2,i, Y3,i, Y4,i values) are better managed than projects with smaller outputs. The relevant data associated with each of these projects is presented in Table 1. The data includes the project lead-times
Table 1
The case study data (each line lists the values for projects 1–52, in order).
Project: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52
Manager: A A A A B B B B B C C C C C D D D D D D D E E E F F F F F G G H H H H H I I I I J J J J J J J K K K K K
Allocated budget (k$): 156 462 271 133 429 437 149 144 186 112 201 541 397 374 477 502 163 507 509 290 394 305 209 313 272 641 461 510 270 441 489 368 141 422 147 118 223 261 409 381 535 268 575 386 332 510 187 479 116 365 556 286
Actual cost (k$): 198 811 365 202 520 492 150 245 206 200 329 515 465 418 842 548 311 541 601 390 588 343 312 566 310 718 728 556 253 479 791 585 248 612 169 202 261 387 607 458 626 306 566 471 369 692 348 535 143 496 745 375
Lead time (weeks): 20 48 26 13 37 39 13 15 17 11 17 41 35 39 43 48 15 52 41 26 35 28 22 28 26 39 48 48 28 39 43 37 13 37 16 20 24 24 41 35 52 27 47 39 30 48 33 41 15 33 52 26
Completion time (weeks): 22 43 37 17 50 46 17 22 24 26 26 41 52 54 52 43 20 43 39 24 48 28 17 37 35 37 41 41 22 41 56 35 24 54 20 20 26 26 37 50 74 26 43 52 33 54 28 48 15 41 61 35
Implementation cost (k$): 33 114 66 32 93 85 29 37 26 41 51 118 78 77 146 126 58 98 110 59 99 48 49 78 64 117 119 99 55 93 127 101 38 94 25 31 41 57 116 82 118 49 119 70 59 118 48 107 22 78 125 69
Implementation time (weeks): 7 20 11 4 15 15 4 7 4 9 7 19 13 15 20 20 5 15 15 9 15 7 7 11 11 15 17 15 7 11 20 13 8 15 7 7 7 8 15 15 22 9 17 15 11 20 7 16 5 14 20 11
Cost stability: 8.826 7.310 8.258 6.954 8.985 8.446 8.244 11.403 11.710 17.212 9.381 11.751 12.422 7.776 6.485 7.331 7.236 6.353 8.045 9.699 8.606 10.142 7.386 12.331 6.702 11.198 6.394 7.513 6.592 6.402 6.636 11.442 14.472 6.707 7.764 7.513 9.881 10.788 7.305 12.392 8.757 6.423 12.121 6.575 10.616 11.390 10.977 12.285 9.328 10.941 10.225 10.194
Time stability: 10.811 12.121 8.857 8.787 10.101 7.077 10.215 10.718 7.225 11.723 9.328 10.122 11.976 11.038 7.862 7.072 6.645 9.794 10.929 7.342 7.179 6.935 9.643 8.097 8.673 8.518 9.425 6.357 6.575 7.657 8.292 7.849 11.601 6.448 11.848 7.924 7.305 7.289 9.662 8.811 9.579 7.358 9.823 10.650 6.693 6.887 8.210 11.468 11.468 8.190 6.285 10.730
and allocated budgets, adjustments, additional or deleted functionalities, negotiations, and so on. The projects are sorted first by manager, and then by their acceptance date. Detailed evaluations of the activity time and cost probability distribution functions were not regularly collected for all fifty-two projects. Therefore, for our purposes, the inputs of cost and time risks were calculated post factum. The cost values are in thousands of dollars (USD), and the durations are in weeks.
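The inputs and outputs of Eqs. (4)–(10) can be computed directly from a project's planned and actual figures. The sketch below uses illustrative values loosely based on the first record in Table 1; the standard deviations are back-derived from the stability coefficients rather than taken from a Monte Carlo run:

```python
def project_inputs_outputs(E_C, sigma_C, E_T, sigma_T, e_C, e_T, l_C, l_T):
    """E_*: expected (planned) cost/time; sigma_*: their standard deviations;
    e_*: actual cost/time excluding implementation; l_*: implementation cost/time."""
    X1 = E_C / sigma_C   # (4) cost stabilization coefficient
    X2 = E_T / sigma_T   # (5) completion time stabilization coefficient
    X3 = E_T / E_C       # (6) reciprocal of the project intensity
    Y1 = E_C / e_C       # (7) planned-to-actual cost ratio
    Y2 = E_T / e_T       # (8) planned-to-actual completion time ratio
    Y3 = e_C / l_C       # (9) project-to-implementation cost ratio
    Y4 = e_T / l_T       # (10) project-to-implementation time ratio
    return (X1, X2, X3), (Y1, Y2, Y3, Y4)

inputs, outputs = project_inputs_outputs(
    E_C=156.0, sigma_C=156.0 / 8.826,  # sigma chosen so that X1 = 8.826
    E_T=20.0, sigma_T=20.0 / 10.811,   # sigma chosen so that X2 = 10.811
    e_C=198.0, e_T=22.0, l_C=33.0, l_T=7.0)
print([round(v, 3) for v in inputs + outputs])
```

Note that the ratios follow the equations as stated in the text; they are not a claim about how every figure in Table 2 was derived.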
4.3. The computed scores of the projects

The inputs and outputs related to each of the projects are presented in Table 2, together with the efficiency scores of all projects, as computed by DEA (the CCR model), and their computed CE scores. Table 2 shows that nineteen of the fifty-two projects are efficient and receive a score of 1. The project with the highest average normalized score was project #16.
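The CE scores can be sketched by solving the CCR program for each DMU, evaluating every DMU under every optimal weight set, and averaging the columns of the resulting cross-evaluation matrix. This simple variant ignores the secondary-goal refinements often used to resolve non-unique optimal weights, and the data are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

def cross_efficiency_scores(X, Y):
    """Cross-Efficiency (CE) scores. X: (n, m) inputs, Y: (n, s) outputs."""
    n, m = X.shape
    s = Y.shape[1]
    E = np.zeros((n, n))                # cross-evaluation matrix
    for k in range(n):
        # CCR weights for DMU k: max u·y_k  s.t.  v·x_k = 1,  u·y_j - v·x_j <= 0.
        c = np.concatenate([np.zeros(m), -Y[k]])
        res = linprog(c, A_ub=np.hstack([-X, Y]), b_ub=np.zeros(n),
                      A_eq=np.concatenate([X[k], np.zeros(s)])[None],
                      b_eq=[1.0], bounds=[(0, None)] * (m + s))
        v, u = res.x[:m], res.x[m:]
        E[k] = (Y @ u) / (X @ v)        # row k: every DMU under DMU k's weights
    return E.mean(axis=0)               # CE score = column average

# Toy data (hypothetical): 3 DMUs, one input, one output.
X = np.array([[1.0], [2.0], [4.0]])
Y = np.array([[1.0], [4.0], [6.0]])
scores = cross_efficiency_scores(X, Y)
ranking = np.argsort(-scores) + 1       # 1-based DMU indices, best first
print(ranking)                          # → [2 3 1]
```

A higher CE score signifies a higher rank, matching the scoring rule described in Section 2.2.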
Table 2 Project scores as computed by the CCR model and by the Cross-Efficiency (CE) ranking method. Project
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52
Manager
          Inputs                    Outputs                          CCR          CE
Manager   X1,i     X2,i     X3,i   Y1,i    Y2,i    Y3,i    Y4,i    efficiency   score    Rank
A          8.826   10.811   0.128  0.788   0.909   0.212   0.350   0.6944       0.6084   46
A          7.310   12.121   0.104  0.570   1.116   0.247   0.417   0.9676       0.7173   30
A          8.258    8.857   0.096  0.743   0.703   0.244   0.423   0.8777       0.7428   24
A          6.954    8.787   0.098  0.658   0.765   0.241   0.308   0.8417       0.6916   35
B          8.985   10.101   0.086  0.825   0.740   0.217   0.405   0.9039       0.7228   28
B          8.446    7.077   0.089  0.888   0.848   0.195   0.385   0.9824       0.8287   11
B          8.244   10.215   0.087  0.993   0.765   0.195   0.308   0.9906       0.7180   29
B         11.403   10.718   0.104  0.588   0.682   0.257   0.467   0.7434       0.5975   48
B         11.710    7.225   0.091  0.903   0.708   0.140   0.235   0.8830       0.5865   50
C         17.212   11.723   0.098  0.560   0.423   0.366   0.818   1            0.6680   41
C          9.381    9.328   0.085  0.611   0.654   0.254   0.412   0.8806       0.6810   37
C         11.751   10.122   0.076  1.051   1       0.218   0.463   1            0.8177   12
C         12.422   11.976   0.088  0.854   0.673   0.197   0.371   0.7582       0.5737   51
C          7.776   11.038   0.104  0.895   0.722   0.206   0.385   0.8829       0.6933   33
D          6.485    7.862   0.090  0.567   0.827   0.306   0.465   1            0.8965    4
D          7.331    7.072   0.096  0.916   1.116   0.251   0.417   1            0.9716    1
D          7.236    6.645   0.092  0.524   0.750   0.356   0.333   1            0.7957   15
D          6.353    9.794   0.103  0.937   1.209   0.193   0.289   1            0.8157   13
D          8.045   10.929   0.081  0.847   1.051   0.216   0.366   1            0.7919   17
D          9.699    7.342   0.090  0.744   1.083   0.203   0.346   0.9327       0.7625   21
D          8.606    7.179   0.089  0.670   0.729   0.251   0.429   0.9657       0.7937   16
E         10.142    6.935   0.092  0.889   1       0.157   0.250   0.8849       0.6944   32
E          7.386    9.643   0.105  0.670   1.294   0.234   0.318   1            0.7505   22
E         12.331    8.097   0.090  0.553   0.757   0.249   0.393   0.8445       0.6246   45
F          6.702    8.673   0.096  0.877   0.743   0.235   0.423   1            0.8507    9
F         11.198    8.518   0.061  0.893   1.054   0.183   0.385   1            0.8305   10
F          6.394    9.425   0.104  0.633   1.171   0.258   0.354   1            0.8011   14
F          7.513    6.357   0.094  0.917   1.171   0.194   0.313   1            0.8998    3
F          6.592    6.575   0.104  1.067   1.273   0.204   0.250   1            0.9263    2
G          6.402    7.657   0.088  0.921   0.951   0.211   0.282   1            0.8586    6
G          6.636    8.292   0.088  0.618   0.768   0.260   0.465   1            0.8602    5
H         11.442    7.849   0.101  0.629   1.057   0.275   0.351   0.9327       0.6811   36
H         14.472   11.601   0.092  0.569   0.542   0.270   0.615   0.8876       0.6083   47
H          6.707    6.448   0.088  0.690   0.685   0.223   0.405   1            0.8582    7
H          7.764   11.848   0.109  0.870   0.800   0.170   0.438   0.8904       0.6931   34
H          7.513    7.924   0.170  0.584   1       0.263   0.350   0.8784       0.6262   44
I          9.881    7.305   0.108  0.854   0.923   0.184   0.292   0.8092       0.6685   40
I         10.788    7.289   0.092  0.674   0.923   0.218   0.333   0.8522       0.6808   38
I          7.305    9.662   0.100  0.674   1.108   0.284   0.366   1            0.7867   18
I         12.392    8.811   0.092  0.832   0.700   0.215   0.429   0.8741       0.6657   42
J          8.757    9.579   0.097  0.855   0.703   0.221   0.423   0.8862       0.7261   26
J          6.423    7.358   0.101  0.876   1.039   0.183   0.333   0.9973       0.8571    8
J         12.121    9.823   0.082  1.016   1.093   0.207   0.362   0.9269       0.7411   25
J          6.575   10.650   0.101  0.820   0.750   0.181   0.385   0.9507       0.7258   27
J         10.616    6.693   0.090  0.900   0.909   0.178   0.367   1            0.7703   19
J         11.390    6.887   0.094  0.737   0.889   0.231   0.417   0.9844       0.7468   23
J         10.977    8.210   0.177  0.537   1.179   0.257   0.212   0.8390       0.4801   52
K         12.285   11.468   0.086  0.895   0.854   0.223   0.390   0.8460       0.6490   43
K          9.328   11.468   0.129  0.811   1       0.190   0.333   0.6634       0.5865   49
K         10.941    8.190   0.090  0.736   0.805   0.214   0.424   0.8705       0.7058   31
K         10.225    6.285   0.094  0.746   0.853   0.225   0.385   1            0.7692   20
K         10.194   10.730   0.091  0.763   0.743   0.241   0.423   0.8422       0.6720   39
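For reference, each project's CCR efficiency score (the penultimate columns above) is obtained by solving one linear program per project in the multiplier form of Charnes et al. (1978). The sketch below, which uses only the first five projects from the table as an illustrative subset, shows this computation with `scipy.optimize.linprog`. Two caveats: DEA scores are relative to the sample, so scores computed on a subset will not match the full-sample column above, and the Cross-Efficiency (CE) scores additionally require averaging each project's efficiency under every other project's optimal weights (Sexton et al., 1986), which is omitted here.

```python
# CCR efficiency (input-oriented, multiplier form) for project o:
#   max  u . Y_o
#   s.t. v . X_o = 1,  u . Y_j - v . X_j <= 0 for all projects j,  u, v >= 0
import numpy as np
from scipy.optimize import linprog

# First five projects from the table above (3 inputs, 4 outputs).
X = np.array([
    [8.826, 10.811, 0.128],
    [7.310, 12.121, 0.104],
    [8.258,  8.857, 0.096],
    [6.954,  8.787, 0.098],
    [8.985, 10.101, 0.086],
])
Y = np.array([
    [0.788, 0.909, 0.212, 0.350],
    [0.570, 1.116, 0.247, 0.417],
    [0.743, 0.703, 0.244, 0.423],
    [0.658, 0.765, 0.241, 0.308],
    [0.825, 0.740, 0.217, 0.405],
])

def ccr_efficiency(o, X, Y):
    """CCR efficiency of project o relative to the sample (X, Y)."""
    n, m = X.shape                # n projects, m inputs
    s = Y.shape[1]                # s outputs
    # Decision variables: [u_1..u_s, v_1..v_m], all non-negative.
    c = np.concatenate([-Y[o], np.zeros(m)])            # maximize u.Y_o
    A_ub = np.hstack([Y, -X])                           # u.Y_j - v.X_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]    # v.X_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

scores = [ccr_efficiency(o, X, Y) for o in range(len(X))]
```

By construction every score lies in (0, 1], and at least one project in any sample is CCR-efficient (score 1), since some project attains the best achievable output/input ratio under its own most favourable weights.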
4.4. The ranking of the candidates

Table 3 shows the rank of each project, grouped by project manager, and each candidate's grade, calculated by averaging the ranks of the candidate's past projects (Section 3 — Step 5). Table 3 is sorted by the grades of the candidates and, within each candidate, by the chronological sequence of his or her projects. Table 3 shows that candidate G was at the top of the list, candidate F was second, and candidate D was third.

Table 3
Manager grades. Columns 1–7 give the ranks of each candidate's past projects in chronological order.

Manager    1    2    3    4    5    6    7   Grade
G          6    5                             5.50
F          9   10   14    3    2              7.60
D          4    1   15   13   17   21   16   12.43
J         26    8   25   27   19   23   52   25.71
E         32   22   45                       33.00
B         28   11   29   48   50             33.20
H         36   47    7   34   44             33.60
A         46   30   24   35                  33.75
I         40   38   18   42                  34.50
C         41   37   12   51   33             34.80
K         43   49   31   20   39             36.40

Pairwise comparisons between the candidates, according to the ranks of their projects, were statistically tested (Section 3 — Step 6). Table 4 presents the significance levels (using the MWW test, a one-tailed test) of these pairwise comparisons. The decision-makers set a significance level of α = 0.05 (Section 3 — Step 7.1), and candidates G, F, and D were consequently included in the subgroup of preferred candidates (Section 3 — Step 7.2). Importantly, the proposed method ensures that no candidate selected from this subgroup performs significantly worse than any other candidate. In this implementation, the decision-makers selected candidate D as the preferred candidate for the new project. Although candidate D was third in the ranking, his personal skills were best suited to the project under consideration, and neither G nor F performed significantly better than D. Table 5 presents the candidates that would be included in the preferred candidate subgroup for various values of α: for α < 0.003 the preferred subgroup includes all of the candidates, and for α > 0.428 it includes only candidate G.

5. Summary

The matching of the right manager with the right project is crucial for the project's success. Attaining a proper match is complicated because many factors must be taken into consideration. Most of the criteria for such a match are immeasurable, and the decision maker is required to use both intuition and common sense. Clearly, subjective considerations cannot be avoided throughout the process of matching an adequate manager to a specific project. But at a minimum, the evaluation of candidates for managing projects can be supported by a DMSS based on measurable, objective criteria. The role of the DMSS is to maintain the matching process wherever subjective considerations can be replaced by objective ones. The decisions throughout the process, however, are not determined by any DMSS, which should support the decision maker but not replace him or her. Objective evidence about the candidates' professionalism in managing projects, as reflected by their past performances, may
Table 4
Significance levels of the MWW test (a one-tailed test). Each entry is the p-value for the hypothesis that the column candidate performed better than the row candidate.

        G        F        D        J        E        B        H        A        I        C
G       –
F       0.4286   –
D       0.2500   0.1010   –
J       0.0278   0.0088   0.0087   –
E       0.1000   0.0179   0.0083   0.2583   –
B       0.0476   0.0079   0.0240   0.1338   0.5000   –
H       0.0476   0.0278   0.0240   0.1717   0.3929   0.5000   –
A       0.0667   0.0079   0.0030   0.1152   0.4286   0.5476   0.3651   –
I       0.0667   0.0079   0.0061   0.2061   0.5714   0.5476   0.5476   0.4429   –
C       0.0476   0.0079   0.0240   0.1338   0.3929   0.3452   0.5000   0.3651   0.4524   –
K       0.0476   0.0400   0.0025   0.1010   0.5000   0.4206   0.5000   0.3651   0.3651   0.5000
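The pairwise significance levels in Table 4 follow directly from the project ranks in Table 3, and the preferred subgroup for a given α (Section 3 — Step 7.2) follows from the p-values. The sketch below, assuming SciPy's `mannwhitneyu` (exact method, one-tailed) as the MWW implementation, reproduces representative Table 4 entries and then applies the subgroup rule: a candidate is dropped whenever some other candidate's project ranks are significantly better at level α.

```python
# Reproduce Table 4 p-values and the preferred-candidate subgroup (Step 7.2)
# from the per-candidate project ranks of Table 3 (lower rank = better project).
from scipy.stats import mannwhitneyu

ranks = {
    "G": [6, 5],
    "F": [9, 10, 14, 3, 2],
    "D": [4, 1, 15, 13, 17, 21, 16],
    "J": [26, 8, 25, 27, 19, 23, 52],
    "E": [32, 22, 45],
    "B": [28, 11, 29, 48, 50],
    "H": [36, 47, 7, 34, 44],
    "A": [46, 30, 24, 35],
    "I": [40, 38, 18, 42],
    "C": [41, 37, 12, 51, 33],
    "K": [43, 49, 31, 20, 39],
}

def p_better(a, b):
    """One-tailed MWW p-value that candidate a's ranks are lower (better) than b's."""
    return mannwhitneyu(ranks[a], ranks[b],
                        alternative="less", method="exact").pvalue

def preferred_subgroup(alpha):
    """Keep a candidate unless some other candidate is significantly better."""
    names = list(ranks)
    return {x for x in names
            if not any(p_better(y, x) < alpha for y in names if y != x)}
```

For example, `p_better("G", "F")` gives 0.4286 and `p_better("F", "D")` gives 0.1010, matching Table 4, and `preferred_subgroup(0.05)` yields {G, F, D}, matching the subgroup chosen by the decision-makers.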
Table 5
Preferred candidate subgroups as determined by significance levels.

α            Preferred candidate subgroup
< .003       G, F, D, J, E, B, H, A, I, C, K
.003         G, F, D, J, E, B, H, A, I, C
.004–.006    G, F, D, J, E, B, H, I, C
.007         G, F, D, J, E, B, H, C
.008         G, F, D, J, E, H
.009–.024    G, F, D, H
.025–.101    G, F, D
.102–.428    G, F
> .428       G
be very helpful in the selection of the 'best' manager from a group of candidates with similar professional backgrounds. This paper proposes a DMSS module that ranks the candidates on the basis of the relative success of their past projects, which are scored according to objective inputs and outputs.

The initial group of candidates evaluated by the proposed DMSS framework is limited to candidates with records in a common database, that is, candidates with a history of managing organizational projects. A new or outside candidate cannot be included in the group ranked by the DMSS; such a candidate must instead be compared subjectively against the candidate selected by the DMSS. In any event, requiring a significance level for the ranking results may identify several candidates who can all be declared best, after which subjective criteria are used to select the single winning candidate.

The possible bias and the significance level of the ranking results are crucial for the reliability and efficiency of the DMSS, and thus for its impact on the project manager selection process. Therefore, outlier past projects must be eliminated from the data sample processed by the DMSS, and the status of their managers should be reconsidered. Moreover, data records from an organization with a more homogeneous project portfolio, sampled during a stable organizational state, tend to produce a more reliable ranking with improved significance levels.

The proposed DMSS module is limited to the aspect of candidates' proven performance. It is based on objective criteria, that is, criteria with measurable values, and the decision maker is free to select any measurable criterion as an input or output for scoring relative past project performance. 
Because the proposed DMSS module evaluates performance outcomes and not the reasons behind them, the criteria chosen as inputs and outputs should be associated with the project rather than with the candidates' qualifications. There is as yet no answer to the question of what constitutes the ideal set of inputs and outputs. Nevertheless, defining inputs in the context of project risks is recommended, because such criteria may reduce result biases and improve the significance level of the candidate ranking. Further conclusions on the selection of inputs and outputs should be validated in longitudinal research studies.

Personal past performance records are informative measures for project manager selection because they reflect most of the criteria that should be considered, and our DMSS module is a useful tool for analyzing these records in the decision-making process. However, these records do not allow us to consider the candidacy of employees without a history of managing projects within the organization. Moreover, the proposed module does not consider the candidates' personal, management, and technical skills that are specifically required by the current project. Our proposal for future research is therefore to establish a comprehensive framework for selecting a project manager. Such a framework should integrate the proposed DMSS module with procedures, based on both objective and subjective criteria, for analyzing the candidates' human, conceptual, organizational, and technical
skills, when weighted against the specific requirements of the project.

References

Adler, N., Friedman, L., Sinuany-Stern, Z., 2002. Review of ranking methods in the DEA context. European Journal of Operational Research 140 (2), 249–265.
Asosheh, A., Nalchigar, S., Jamporazmey, M., 2010. Information technology project evaluation: an integrated data envelopment analysis and balanced scorecard approach. Expert Systems with Applications 37 (8), 5931–5938.
Ballsteros-Perez, P., Gonzales-Cruz, M.C., Diego, M.F., 2012. Human resource allocation management in multiple projects using sociometric techniques. International Journal of Project Management. http://dx.doi.org/10.1016/j.ijproman.2012.02.005.
Barros, M.O., Werner, C.M.L., Travassos, G.H., 2004. Supporting risks in software project management. Journal of Systems and Software 70 (1–2), 21–35.
Cao, Q., Hoffman, J.J., 2011. A case study for developing a project performance evaluation. International Journal of Project Management 29 (2), 155–164.
Chan, V., Wong, W., 2007. Outlier elimination in construction of software metric models. Proceedings of the 22nd ACM Symposium on Applied Computing, pp. 1484–1488.
Charnes, A., Cooper, W.W., Rhodes, E., 1978. Measuring the efficiency of decision making units. European Journal of Operational Research 2 (6), 429–444.
Cheng, M.-I., Dainty, A.R.J., Moore, D.R., 2005. What makes a good project manager? Human Resource Management Journal 15 (1), 25–37.
Conway, G.R., 1989. Agroecosystem analysis. Agricultural Administration 20 (1), 31–55.
Eilat, H., Golany, B., Shtub, A., 2006. R&D project evaluation: an integrated DEA and balanced scorecard approach. Omega 36 (5), 895–912.
El-Mashaleh, M.S., Rababe, S.M., Hyari, K.H., 2010. Utilizing data envelopment analysis to benchmark safety performance of construction contractors. International Journal of Project Management 28 (1), 61–67.
El-Sabaa, S., 2001. The skills and career path of an effective project manager. International Journal of Project Management 19 (1), 1–7.
Fortune, J., White, D., 2006. Framing of project critical success factors by a systems model. International Journal of Project Management 24 (1), 53–65.
Ghapanchi, A.H., Tavana, M., Khakbaz, M.H., Low, G., 2012. A methodology for selecting portfolios of projects with interactions and under uncertainty. International Journal of Project Management. http://dx.doi.org/10.1016/j.ijproman.2012.01.012.
Golenko-Ginzburg, D., 1988. On the distribution of activity time in PERT. Journal of the Operational Research Society 39 (8), 767–771.
Hadad, Y., Hanani, Z.M., 2011. Combining the AHP and DEA methodologies for selecting the best alternative. International Journal of Logistics Systems and Management 9 (3), 251–267.
Hauschildt, J., Gesche, K., Medcof, J.W., 2000. Realistic criteria for project manager selection and development. Project Management Journal 31 (3), 23–32.
Kelley, J.E., 1961. Critical path planning and scheduling: mathematical basis. Operations Research 9 (3), 296–320.
Kelley, J.E., Walker, M.R., 1959. Critical path planning and scheduling. Proceedings of the Eastern Joint Computer Conference, pp. 160–173.
Khorramshahgol, R., Azani, H., Gousty, Y., 1988. An integrated approach to project evaluation and selection. IEEE Transactions on Engineering Management 35 (4), 265–270.
Laslo, Z., 2003. Activity time–cost tradeoffs under time and cost chance constraints. Computers and Industrial Engineering 44 (3), 365–384.
Laslo, Z., Goldberg, A.I., 2001. Matrix structures and performance: the search for optimal adjustment to organizational objectives. IEEE Transactions on Engineering Management EM-48 (2), 144–156.
Laslo, Z., Goldberg, A.I., 2008. Resource allocation under uncertainty in a multi-project matrix environment: is organizational conflict inevitable? International Journal of Project Management 26 (8), 773–788.
Laslo, Z., Gurevich, G., 2011a. PERT-type projects: time–cost tradeoffs under uncertainty. Simulation 88 (11 or 12). http://dx.doi.org/10.1177/0037549711428341.
Laslo, Z., Gurevich, G., 2011b. A simulation-based decision support system for managing information technology project portfolios. International Journal of Information Technology Project Management 4 (3), in press.
Levy, H., Sarnat, M., 1995. Capital Investment and Financial Decisions. Prentice-Hall, Englewood Cliffs, New Jersey.
Lintner, J., 1965. Security prices, risk, and maximal gains from diversification. Journal of Finance 20 (4), 587–615.
Liu, C.Y., Zheng, Z.Y., 1989. Stabilization coefficient of random variable. Biometrical Journal 31 (4), 431–441.
Mahmood, M.A., Pettingell, K.J., Shaskevich, A.I., 1996. Measuring productivity of software projects: a data envelopment analysis approach. Decision Sciences 27 (1), 57–80.
Muller, R., Turner, J.R., 2007. Matching the project manager's leadership style to project type. International Journal of Project Management 25 (1), 21–32.
Natarajan, T., Rajah, S.R.A., Manikavasagam, S., 2011. Snapshot of personnel productivity assessment in Indian IT industry. International Journal of Information Technology Project Management 2 (1), 48–61.
Ogunlana, S., Siddiqui, Z., Yisa, S., Olomolaiye, P., 2002. Factors and procedures used in matching project managers to construction projects in Bangkok. International Journal of Project Management 20 (5), 385–400.
Roy, B., 1990. Decision-aid and decision-making. European Journal of Operational Research 45 (2–3), 324–331.
Saaty, T.L., 1980. The Analytic Hierarchy Process: Planning, Priority Setting, Resource Allocation. McGraw-Hill, New York.
Seiford, L.M., 1996. Data envelopment analysis: the evolution of the state of the art (1978–1995). Journal of Productivity Analysis 7 (2–3), 99–137.
Seo, Y.-S., Yoon, K.-A., Bae, D.-H., 2008. An empirical analysis of software effort estimation with outlier elimination. Proceedings of the 4th International Workshop on Predictor Models in Software Engineering (PROMISE '08). ACM, New York, NY, USA, pp. 25–32.
Sexton, T.R., Silkman, R.H., Hogan, A.J., 1986. Data envelopment analysis: critique and extensions. In: Silkman, R.H. (Ed.), Measuring Efficiency: An Assessment of Data Envelopment Analysis. Jossey-Bass, San Francisco, CA, pp. 73–105.
Vitner, G., Rozenes, S., Spraggett, S., 2006. Using data envelope analysis to compare project efficiency in a multi-project environment. International Journal of Project Management 24 (4), 323–329.
Yang, L.-R., Huang, C.-F., Wu, K.-S., 2011. The association among project manager's leadership style, teamwork and project success. International Journal of Project Management 29 (3), 258–267.
Zavadskas, E.K., Turskis, Z., Tamosaitiene, J., Marina, V., 2008. Multi-criteria selection of project managers by applying grey criteria. Technological and Economic Development of Economy 14 (4), 462–477.
Zopounidis, C., Doumpos, M., 2002. Multicriteria classification and sorting methods: a literature review. European Journal of Operational Research 138 (2), 229–246.