Faculty productivity is measured, tabulated, and assessed.

Computing Research Programs in the U.S.

Robert Geist, Madhu Chetuparambil, Stephen Hedetniemi, and A. Joe Turner

In 1989, Schneider [3] provided an evaluation, based on contributions to the research literature, of graduate programs in computer science in the U.S. and Canada. He considered contributions to 12 selected research journals over the period from January 1985 through December 1987. We agree with the premises of his study; the primary purpose of this article is to provide an updated report based on an expanded set of journals over a longer time period. Periodic reports such as this can serve at least three groups:

• Current students and faculty. Recently successful programs can have difficulty publicizing their success outside a narrow audience. Employment prospects for current students and recruiting prospects for current programs can be enhanced by an independent validation of increased program productivity.

• Administrators. Budget cuts have become the norm for colleges and universities throughout the U.S., and program planning has taken on a crucial role. Independent measures of program success are valuable to the planning process.

• Prospective students. There are now more than 150 doctoral programs and more than 350 master's programs in computing in the U.S. [1, 4]. Selecting a graduate program has become a formidable task, and it is very difficult to be admitted to the “top-ranked” or “well-known” programs. Reports such as this one help identify very good programs that can be considered alternatives to more well-known programs.


There are two standard alternatives to evaluations such as this one: those based on subjective measures and those based on indirect measures. In 1993 we participated in a survey by U.S. News & World Report in which we were asked to rate on a scale of 1 (marginal) to 5 (distinguished) each of 106 doctoral programs in computer science in the U.S. These 106 programs each granted 5 or more doctorates in computer science during the years 1987–1991. The 10 (really 11) highest-rated programs were listed in the March 22, 1993 issue of U.S. News & World Report [5]. This list is shown in Table 1. A more recent ranking (and one that is more prestigious in the research community) that uses a similar methodology was done by the National Research Council. A group of 221 computer science researchers ranked 108 computer science Ph.D. programs on various characteristics. The 108 programs were chosen on the basis of the average number of Ph.D. graduates during a period that was essentially the same as for the U.S. News & World Report study, but it was also required that there be at least one graduate in 1990. The 36 highest-rated programs were listed in the November 1995 issue of Computing Research News [2]. The top 10 programs from the NRC study are shown in Table 2.

There are at least two difficulties with such evaluations. First, few evaluators are familiar with the strengths and weaknesses of more than 100 doctoral programs. Although we might expect a familiarity with and consensus on the very best programs, we would expect a wide disparity of opinion on the remainder. Second, we conjecture that a subjective ranking of colleges of science or engineering would have produced much the same list (i.e., the ratings might well have been based on the perceived overall strength of the university or college housing the program). The March 22, 1993 ranking of graduate schools of engineering by U.S. News & World Report [5] and a March 20, 1995 ranking of graduate schools of engineering by U.S. News & World Report [6] support this conjecture: in each case, 8 of the top 10 engineering schools intersect with the top 10 computer science programs in Table 1.

A second commonly used measure is research funding. This data is readily available and comparisons are easily made, but there are two serious drawbacks to the use of this measure. First, it is at best a means rather than an end, fertilizer tonnage rather than bushels harvested. The degree of correlation between means and end is difficult to judge without end data such as we provide here. Second, zealous use of funding as a measure of success can shift the status of funding from means to end, thereby encouraging scientists to leave science departments in favor of “sales” departments. While such a shift may be in the short-term interest of universities struggling with budget cuts, it is certainly not in the best long-term interest of either universities or science.

We do not suggest that research publications are the only good measure of success. Instead, we offer a ranking based on research publications as a basis of comparison with other rankings, in the hope that it will help overcome some of the disadvantages of the subjective rankings.

Table 1. Top 10 programs from U.S. News & World Report, 1993 [5]

Rank  Institution                        Avg. rating (1–5)
 1.   Stanford                           4.9
 1.   California, Berkeley               4.9
 1.   Massachusetts Institute of Tech.   4.9
 1.   Carnegie Mellon                    4.9
 5.   Cornell                            4.7
 6.   Illinois, Urbana                   4.4
 7.   Washington, Seattle                4.3
 8.   Texas, Austin                      4.1
 8.   Wisconsin, Madison                 4.1
10.   California Institute of Tech.      4.0
10.   Princeton                          4.0

Table 2. Top 10 programs from National Research Council Ranking, 1995

Rank  Institution                        Rating
 1.   Stanford                           4.97
 2.   Massachusetts Institute of Tech.   4.91
 3.   California, Berkeley               4.88
 4.   Carnegie Mellon                    4.76
 5.   Cornell                            4.64
 6.   Princeton                          4.31
 7.   Texas, Austin                      4.18
 8.   Illinois, Urbana                   4.09
 9.   Washington, Seattle                4.04
10.   Wisconsin, Madison                 4.00
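The overlap observations in the text (e.g., 8 of the top 10 engineering schools coinciding with the top 10 computer science programs) amount to intersecting two short lists. As a purely illustrative sketch, not part of the original study, the following code intersects the Table 1 and Table 2 lists, with institution names normalized by hand:

```python
# Top-10 lists transcribed from Tables 1 and 2 (Table 1 has 11 entries because of ties).
usn_1993 = {
    "Stanford", "California, Berkeley", "Massachusetts Institute of Tech.",
    "Carnegie Mellon", "Cornell", "Illinois, Urbana", "Washington, Seattle",
    "Texas, Austin", "Wisconsin, Madison", "California Institute of Tech.", "Princeton",
}
nrc_1995 = {
    "Stanford", "Massachusetts Institute of Tech.", "California, Berkeley",
    "Carnegie Mellon", "Cornell", "Princeton", "Texas, Austin",
    "Illinois, Urbana", "Washington, Seattle", "Wisconsin, Madison",
}

overlap = usn_1993 & nrc_1995
print(len(overlap))  # 10 -- every program in the NRC top 10 also appears in Table 1
```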

Table 3. The journals used in this study

 1. ACM Transactions on Mathematical Software
 2. ACM Transactions on Database Systems
 3. ACM Transactions on Programming Languages and Systems
 4. ACM Transactions on Graphics
 5. ACM Transactions on Information Systems
 6. ACM Transactions on Computer Systems
 7. ACM Transactions on Software Engineering and Methodology
 8. ACM Transactions on Modeling and Computer Simulation
 9. ACM Transactions on Networking
10. ACM Transactions on Computer-Human Interaction
11. IEEE Transactions on Computers
12. IEEE Transactions on Knowledge and Data Engineering
13. IEEE Transactions on Parallel and Distributed Systems
14. IEEE Transactions on Pattern Analysis and Machine Intelligence
15. IEEE Transactions on Software Engineering
16. IEEE Transactions on VLSI Systems
17. Journal of the ACM

Procedure

We examined every issue of the 17 archival research journals listed in Table 3 published during the period from January 1990 through May 1995. The list includes all of the research journals of the two major computing societies, the ACM and the IEEE Computer Society, and it is a superset of the collection used by Schneider [3] with the exception of Communications, which we have omitted. The list in Table 3 certainly does not comprise all of the high-quality computing journals. We also recognize that many computing conference proceedings have attained a status equivalent to that of high-quality journals. However, it was not feasible to include all high-quality journals and conference proceedings because 1) the number of publications would be too large to process by hand (and the data is not available in electronic form), and 2) there would be too much controversy as to which journals or conference proceedings were sufficiently high quality. Our goal was to select a representative collection of high-quality journals that would be likely to represent relative publication activity in widely recognized areas of computing research. We believe that using all of the research journals of the two major computing societies provides the basis for a useful comparison of research publication activity.


To compute the ratings on which the institutions were ranked, each research article in each journal issue was given a total weight of 1.0, which was then apportioned equally among all coauthors. Thus, for example, an article that was singly-authored, or where all authors were from the same institution, would result in a weighted publication count of one for the authors' institution, while an article that had two authors from different institutions would result in a weighted publication count of 0.5 for each institution. Because we were interested only in institutional research activity, we did not distinguish among the different departments at the institutions or attempt to find current addresses for the named authors. It would perhaps be useful to have average publication counts per faculty member. However, it is not feasible to do this because the data is not readily available, and it would be essentially impossible to obtain accurate counts of the number of research faculty at each of 100 institutions during each of the years for which publication counts were obtained. Therefore, only the total weighted publication counts by institution were compiled.
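The counts in this study were compiled by hand; as a purely illustrative sketch of the weighting rule (with hypothetical article lists and institution names), the following code apportions each article's unit weight among its authors' institutions:

```python
from collections import defaultdict

def weighted_counts(articles):
    """Apportion each article's weight of 1.0 equally among its coauthors and
    credit each coauthor's share to that coauthor's institution."""
    totals = defaultdict(float)
    for institutions in articles:        # one list of author institutions per article
        share = 1.0 / len(institutions)  # equal share per coauthor
        for inst in institutions:
            totals[inst] += share        # shares from same-institution coauthors add up
    return dict(totals)

# Hypothetical data: a sole-authored article and a two-author article
# whose authors are at different institutions.
articles = [
    ["Clemson University"],
    ["Clemson University", "Purdue University"],
]
print(weighted_counts(articles))
# {'Clemson University': 1.5, 'Purdue University': 0.5}
```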

Table 4. Top 10 programs from Schneider, 1989 [3]

Rank  Institution                        Weighted publication count
 1.   Purdue                             37.38
 2.   Massachusetts Institute of Tech.   35.32
 3.   Southern California                35.23
 4.   Stanford                           35.12
 5.   Illinois, Urbana                   32.75
 6.   Carnegie Mellon                    32.73
 7.   California, Berkeley               30.70
 8.   Toronto                            27.28
 9.   Maryland, College Park             25.94
10.   Cornell                            24.33

Table 5. Top 100 programs in the U.S., based on institutional publication count

Rank  Institution                                        Weighted publication count
  1.  University of Maryland, College Park               68.60
  2.  Massachusetts Institute of Technology, Cambridge   66.70
  3.  University of Illinois, Urbana/Champaign           63.90
  4.  University of Michigan, Ann Arbor                  49.10
  5.  University of Texas at Austin                      47.00
  6.  Carnegie Mellon University, Pittsburgh, PA         46.50
  7.  Stanford University, Palo Alto, CA                 44.80
  8.  University of Wisconsin-Madison                    42.10
  9.  University of Southern California, Los Angeles     40.10
 10.  Purdue University, West Lafayette, IN              39.00
 11.  University of California, Berkeley                 37.60
 12.  University of Massachusetts, Amherst               35.80
 13.  University of California, Santa Barbara            28.10
 14.  Ohio State University, Columbus                    26.70
 15.  University of Washington, Seattle                  26.00
 16.  University of Minnesota, Minneapolis               25.60
 17.  University of California, Los Angeles              23.80
 18.  Pennsylvania State University, University Park     21.90
 19.  Georgia Institute of Technology, Atlanta           21.40
 20.  University of California, Irvine                   20.20
 21.  Cornell University, Ithaca, NY                     19.80
 22.  Princeton University, NJ                           19.40
 23.  New York University, NY                            19.00
 24.  University of Illinois at Chicago                  18.80
 25.  The University of Arizona, Tucson                  17.60
 26.  University of California, San Diego                16.80
 27.  Texas A & M University, College Station            16.40
 28.  University of Pittsburgh, PA                       16.30
 29.  SUNY at Stony Brook, NY                            15.70
 30.  Rutgers University, New Brunswick, NJ              15.60
 31.  Columbia University, New York, NY                  15.40
 32.  Michigan State University, East Lansing            15.00
 33.  Yale University, New Haven, CT                     14.30
 34.  University of Florida, Gainesville                 14.00
 35.  University of Southwestern Louisiana, Lafayette    14.00
 36.  University of Colorado, Boulder                    13.80
 37.  University of Pennsylvania, Philadelphia           13.80
 38.  Rice University, Houston, TX                       13.70
 39.  Duke University, Durham, NC                        11.60
 40.  Louisiana State University, Baton Rouge            11.30

Results

In Table 5 we show our ranking of the top 100 research programs in computing in the U.S., based on the total weighted publication count for each institution from the journals that were used in the study. The top 10 programs in this table agree substantially with the top 10 in the 1995 NRC ranking [2], the top 10 in the 1993 survey of U.S. News & World Report [5], and the top 10 in the 1989 ranking of Schneider [3], shown in Table 4. The reason for including only U.S. institutions in Table 5 is that the journals chosen for the study are more likely to be primary journals for U.S. researchers than for researchers in other countries, with the possible exception of Canada. Additionally, the two subjective rankings that are used for comparison include only U.S. institutions. A complete ranking of all institutional contributors to the named journals over the identified period can be accessed from the Web page at http://www.cs.clemson.edu/~rmg/homepage.html.

Table 5. (continued) Top 100 programs in the U.S., based on institutional publication count

Rank  Institution                                                     Weighted publication count
 41.  University of California, Davis                                 10.60
 42.  Naval Postgraduate School, Monterey, CA                         10.30
 43.  Clemson University, SC                                           9.80
 43.  SUNY at Buffalo, NY                                              9.80
 45.  Brown University, Providence, RI                                 9.50
 46.  University of California, Santa Cruz                             9.30
 47.  Case Western Reserve University, Cleveland, OH                   9.00
 48.  Syracuse University, NY                                          8.80
 49.  Virginia Polytechnic Institute & State University, Blacksburg    8.60
 50.  University of Texas at Dallas, Richardson                        8.50
 51.  University of Iowa, Iowa City                                    8.30
 51.  University of Virginia, Charlottesville                          8.30
 53.  Boston University, MA                                            8.00
 54.  University of North Carolina, Chapel Hill                        7.60
 55.  University of Houston, TX                                        7.50
 56.  Harvard University, Cambridge, MA                                7.00
 56.  Johns Hopkins University, Baltimore, MD                          7.00
 58.  George Mason University, Fairfax, VA                             6.90
 59.  University of Tennessee, Knoxville                               6.80
 59.  University of Utah, Salt Lake City                               6.80
 61.  College of William and Mary, Williamsburg, VA                    6.60
 61.  Polytechnic University, Brooklyn, NY                             6.60
 63.  Southern Methodist University, Dallas, TX                        6.30
 63.  Washington State University, Pullman                             6.30
 65.  California Institute of Tech., Pasadena                          6.20
 65.  Colorado State University, Fort Collins                          6.20
 67.  SUNY at Albany, NY                                               6.10
 68.  Washington University, St. Louis, MO                             5.80
 69.  Rensselaer Polytechnic Institute, Troy, NY                       5.70
 70.  University of Rochester, NY                                      5.40
 71.  North Carolina State University, Raleigh                         5.30
 72.  Northeastern University, Boston                                  5.20
 72.  University of Central Florida, Orlando                           5.20
 74.  Brigham Young, Provo, UT                                         5.00
 74.  Northwestern University, Evanston, IL                            5.00
 74.  Old Dominion University, Norfolk, VA                             5.00
 74.  University of Rhode Island, Kingston                             5.00
 78.  Iowa State University, Ames                                      4.80
 79.  George Washington University, Washington, DC                     4.70
 79.  New Jersey Institute of Technology, Newark                       4.70
 81.  Vanderbilt University, Nashville, TN                             4.60
 82.  Portland State University, Portland, OR                          4.30
 82.  University of Notre Dame, South Bend, IN                         4.30
 84.  University of Wisconsin-Milwaukee                                4.20
 85.  University of Texas at San Antonio                               4.00
 85.  Wayne State University, Detroit, MI                              4.00
 87.  University of South Florida, Tampa                               3.80
 88.  University of Connecticut, Storrs                                3.70
 89.  University of Oregon, Eugene                                     3.60
 90.  Miami University, Oxford, OH                                     3.50
 90.  University of Alabama, Tuscaloosa                                3.50
 92.  University of Kentucky, Lexington                                3.40
 93.  Oregon State University, Corvallis                               3.30
 93.  University of Delaware, Newark                                   3.30
 95.  University of New Mexico, Albuquerque                            3.20
 96.  University of Maryland, Baltimore County, Baltimore              3.10
 97.  Oregon Grad. Institute of Sc. and Tech., Portland                3.00
 97.  Wright State University, Dayton, OH                              3.00
 99.  CUNY, NY                                                         2.90
100.  Brandeis University, Waltham, MA                                 2.80
100.  Kent State University, OH                                        2.80
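The rank column in Table 5 appears to follow standard competition ranking: rows with equal weighted counts share a rank, and the next distinct count takes the rank of its overall position (hence 43, 43, 45; displayed counts are rounded, so two rows can occasionally show the same rounded value at different ranks). A minimal illustration of that convention, using made-up counts rather than the study's data:

```python
def competition_ranks(counts):
    """Sort counts in descending order and assign standard competition ranks:
    equal counts share a rank, and the next distinct count skips ahead."""
    ordered = sorted(counts, reverse=True)
    ranks = []
    for i, value in enumerate(ordered):
        if i > 0 and value == ordered[i - 1]:
            ranks.append(ranks[-1])      # tie: reuse the previous rank
        else:
            ranks.append(i + 1)          # new value: rank is its 1-based position
    return list(zip(ranks, ordered))

# Made-up counts that reproduce the 43, 43, 45 pattern (here as 1, 2, 2, 4).
print(competition_ranks([10.6, 9.8, 9.8, 9.5]))
# [(1, 10.6), (2, 9.8), (2, 9.8), (4, 9.5)]
```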

Conclusion

We have provided a quantitative evaluation of research programs in computing in the U.S. It is based on contributions to 17 representative research journals over the period from January 1990 through May 1995. While we believe this information could be useful to prospective students and others, we emphasize that we have attempted to measure only one component of a successful program, and that we offer our results only as an additional data point to be used with other rankings and other sources of information. To suggest that our ranking might be used as the sole ranking of overall program quality would almost certainly be a mistake.

References
1. Computing Research News 7, 2 (Mar. 1995).
2. Computing Research News 7, 5 (Nov. 1995).
3. Schneider, G.M. A quantitative evaluation of graduate programs in computer science in the United States and Canada. ACM SIGCSE Bulletin 21, 4 (1989), 20–24.
4. Turner, A.J. U.S. degree programs in computing. In Computing Professionals: Changing Needs for the 1990s. National Academy Press, 1993.
5. U.S. News & World Report (Mar. 22, 1993), 68–76.
6. U.S. News & World Report (Mar. 20, 1995), 102.

ROBERT GEIST ([email protected]) is a professor of computer science at Clemson University.
MADHU CHETUPARAMBIL ([email protected]) is a master's-level student of computer science at Clemson University and a member of the technical staff at Transarc Corporation.
STEPHEN HEDETNIEMI ([email protected]) is Professor and Chair of the Department of Computer Science at Clemson University.
A. JOE TURNER ([email protected]) is a professor of computer science at Clemson University.
