INVESTMENT DECISION MAKING, CONSENSUS AND “DECISION STICKINESS” IN THE GRANT APPLICATION REVIEW PROCESS: AN EMPIRICAL STUDY OF TECHNOLOGY-BASED SMALL ENTERPRISE
Craig S. Galbraith
University of North Carolina Wilmington
Cameron School of Business
Department of Management and Marketing
601 South College Road
Wilmington, NC 28403
Tel: (910) 962-3775  Fax: (910) 962-3815
e-mail: [email protected]

Alex F. DeNoble
San Diego State University
College of Business Administration
Department of Management

Sanford B. Ehrlich
San Diego State University
College of Business Administration
Entrepreneurial Management Center
ACADEMIC ABSTRACT
This research examines the development of consensus among expert panel members evaluating small business applicant presentations during a second-round screening process for Department of Defense related funding. Drawing on the decision dilemma literature, we examined whether a second-round screening process resulted in greater consensus among panel members and to what extent those panel members "stuck" to their evaluations over time. In general, we found that expert panel members can reach consensus and that technical experts appear to be the most "sticky" across their pre- and post-presentation assessments. Our results offer implications for both early stage entrepreneurs and entrepreneurship researchers.

EXECUTIVE SUMMARY

Predicting the future success of early to mid-stage technologies is one of the most fundamental challenges confronting those involved in making resource support decisions. Accordingly, this research examines the development of consensus when evaluating small business applicant presentations during a second-round screening process for Department of Defense related funding. Drawing on the decision dilemma literature, we examined whether a second-round screening process resulted in greater consensus among panel members and to what extent those panel members "stuck" to their evaluations over time.

Decision dilemma research suggests that individuals tend to increase cooperation and build consensus following group discussion about the dilemma. However, both participant personalities and discussion environments can have an important moderating effect on the consensus generated by group discussion. We examined the technology assessment process used by a congressionally funded Department of Defense technology transfer agency and collected pre- and post-presentation assessments from the participating panels of experts in the final stage of the screening process. We found that while second-stage technology application reviews involving panel presentations and discussion do lead to greater consensus regarding the grant decision, the consensus process appears to converge around the original assessments of the technical reviewers.

This exploratory study offers implications for both practitioners and researchers. Entrepreneurs seeking funding would be well advised to present a solid case that can stand up to the challenging queries that can be expected from technical experts. Thus, it is critical that someone from the organization with a deep technical background be present for such panel discussions. From a research perspective, this exploratory project links the equity funding literature from entrepreneurship research to the psychological literature on decision dilemmas. We believe that this study will serve as a basis for additional inquiry and exploration in this area.
INTRODUCTION

Predicting the future success of early to mid-stage technologies is one of the most fundamental challenges confronting those involved in making funding and other resource support decisions. Public and private organizations alike engage in this exercise. The process of selecting appropriate investment opportunities typically involves several distinct stages performed in sequence. Formal reviews are made by equity investors evaluating technology-based business plans, by university technology transfer offices deciding whether to invest in patent and commercialization activities, by corporate R&D departments as part of their stage-gate product development procedures, and by granting agencies reviewing R&D, Small Business Innovation Research (SBIR), and other proposals (Cooper, 1998; Ajamian & Koen, 2002; Ozer, 1999; Linton et al., 2002).

In the case of angel investment groups, for example, the first stage typically involves reviewing a number of written business plans or detailed executive summaries submitted to the angel group. This initial review is often performed by a screening committee, and several researchers have studied it. With respect to venture capital reviews, for example, Shepard and Zacharakis (2002) found that a business plan will often be rejected within the first five minutes of discussion by the screening committee. However, even though we now have a somewhat better understanding of this first-stage screening process, there is still little evidence that it provides any predictive capability. Accordingly, this research specifically examines the development of consensus when evaluating small business applicant presentations during a second-round screening process for Department of Defense related funding.

BACKGROUND

Galbraith et al. (2007) found that experts generally provided little or no additional predictive ability beyond a simple structural model of the technology. In other words, a reasonably accurate model of technology success could be built from organizational variables, such as size of the firm, level of diversification, and stage of technology development; the screening committee "expert" ratings added no incremental predictive value on average. The only group of experts that consistently provided some predictive power, albeit small, was technical personnel; venture capitalists, commercialization consultants, and program administrators showed no capability to predict future success (Galbraith et al., 2006). In a follow-up study, DeNoble et al. (2007) found that experts were particularly bad at avoiding Type II errors, that is, recommending investments in technologies that later proved to be failures. These findings tend to challenge the overall validity of early stage screening processes.

Normally, however, the screening process for investors, whether angel investors or grant funding agencies, proceeds through another stage after the initial screen. Applicants that survive the first stage are typically invited to present their plans to a larger group. In the case of angel investors, this presentation might occur during the angel group's monthly meeting, with several applicants presenting at the same time, each seeking equity investment.
In the case of grant applications, applicants are likewise often asked to present before a panel of experts, with a number of applicants presenting their proposals at the same time. Regardless of the setting, this second-stage presentation of early investment opportunities typically involves a formal oral presentation, then a Q&A period, followed by some form of informal or formal vote by the panel or investors. A number of researchers have provided general descriptions of these early stage investment decision processes. However, most of these descriptive studies have been subjective in nature and based on small samples, and little if any prior research has objectively measured the perceptual changes that occur during these second-stage presentations.

Decision dilemma research has generally found that individuals tend to increase cooperation and build consensus following group discussion about the dilemma (see Hopthrow and Hulbert, 2005). Not surprisingly, researchers have identified a number of moderating elements. Both participant personalities and discussion environments have an important moderating effect on the consensus generated by group discussions (Thatcher and De La Cour, 2003). Time pressure tends to encourage group members to focus on a restricted range of cues and to incorporate completion as one of the relevant group interaction tasks (see Kelly and Loving, 2004). Gender has also been found to be an important moderating variable: in group consensus decision making, for example, Bonito and Lambert (2005) found that the effect of gender on participation was moderated differently by interaction with other participants at higher and lower levels of information similarity.

As previously mentioned, decision dilemma research has generally found that individuals tend to increase cooperation and build consensus among group members following discussion about a dilemma. Most of this research, however, has been experimental in nature, addressing artificial dilemmas. In addition, there has been little or no large scale research relating screening committee consensus development to an actual screening committee's investment decisions. Thus, two research questions are examined in this project.

Research Question 1: Does the process of second-round screening result in greater consensus among panel members?

Research Question 2: Are screening panel members with certain professional backgrounds more "sticky" in their evaluations over time?

METHODS

Sample

This study examined technologies assessed by a congressionally funded Department of Defense (SPAWAR) technology transfer agency, the Center for Commercialization of Advanced Technologies (CCAT), whose specific mission is funding technologies with a homeland security application being developed by small private sector R&D firms, government and defense research agencies, and university laboratories. In addition, because of the Department of Defense's interest in supporting market-sustainable technologies, there is particular interest in technologies with "dual use" capability in the commercial market.
A waterborne bio-warfare agent detection technology, for example, may also have a much broader commercial application as a stand-off water quality monitoring device.

Proposals for funding are received and reviewed approximately three times per year by this particular agency and go through the two-stage screening process described above. The first stage is a review and ranking of the detailed written applications by a panel of three to six experts in the field; this process eliminates approximately 80% of the original applications. Since the primary mission of this funding agency is commercialization of technology, the funding application is modeled after, and very similar to, the detailed business plan summaries often presented to angel and VC equity investors. The application includes sections on the technology, target market description, strategies for commercialization, previous funding (grant, equity, etc.), competition, management and scientific team backgrounds, anticipated milestones and barriers, and the like.

The second stage involves an invited presentation to another panel of experts, usually five to eight in number. This session consists of a formal presentation, followed by a Q&A period and a discussion period by the panel, which in turn is followed by a final vote by the experts, a process very typical of the business plan screening (pre-due diligence) undertaken by many angel investment groups. Approximately 50% of these applicants are then funded based upon the final panel vote. Total funding per successful application ranges between $75,000 and $125,000. Between 2001 and 2007 approximately 700 proposals were received and reviewed by the agency, with about ten percent of the applications being funded. Around seventy percent of both the received and funded proposals were from small firms, with the remainder split between university and defense research laboratories.

The sample for this study consists of thirteen technologies that proceeded to the second-stage screening process in 2005 and sixteen technologies that proceeded to the second-stage screening process in 2006. The technologies are considered highly advanced solutions, generally in early to middle stages of development, ranging from proof of concept to lab prototype. Funding is specifically targeted to advancing the technology to the next phase of development. All but one of the technologies presented and analyzed in this study were from small emergent companies; the single exception was a technology from a Department of Defense R&D laboratory. In all cases, the technology presented was the core technology being developed by the firm.

The technologies presented to this agency are classified as biometrics, communications, computers, electronics, life sciences, materials, photonics, and sensors, and can be considered highly advanced post-9/11 technologies. The technologies presented in the two screening rounds, and analyzed for this study, were "Through Wall Organism Detection," "Bioassay Control," "Biometric-based Tokens," "Beryllium Detection," "Fuel Cell Membranes," "Converser for Healthcare," "Enhancement of Vaccinations," "MEMS Based Chemical Sensors," "Infrared Laser Lens," "Wearable Flexible Displays," "Radiation Detection," "Health Information Sharing," "Laser Wave Mixing," "3-D Face Recognition," "Continuous Biological Monitoring," "Digital Video for 1st Responders," "In-hospital Clinical Evaluation Software,"
"Advanced Modulators," "Card-based Perimeter Control for 1st Responders," "Fluidic Zoom Lens for Miniature Cameras," "Ultrasensitive Accelerometer," "Micropower Source," "Miniature Biological Monitoring Unit," "Intermodal Container Tracking," "Suitcase TOF," "Near Eye Optics," "Electrochromic Visors," and "Handheld Imagers." All of the technologies were considered to have dual-use capabilities for both government and commercial sectors.

As mentioned above, each of the analyzed technologies was invited to the second round of screening, involving presentation and discussion, after scoring high on the initial screen based solely on the written application. The applicant presentations ran over several days, with each proposal allocated approximately one hour. Different screening panels of domain experts were formed based upon the nature of the technology. All reviewers, regardless of their professional background, had substantial experience with technology-based ventures. Not only did each reviewer report his or her expertise in a particular technology domain (such as vaccine development, materials engineering, or robotics) prior to being asked to sit on a screening panel, but after receiving the applications the reviewers were also asked to exclude themselves from the evaluation of any technology they did not feel qualified to evaluate. All of the reviewers held either master's or doctoral degrees and were categorized as venture capitalists, active technical scientists/engineers, technology commercialization consultants, active entrepreneurs, or program administrators.

Expert Assessment Variables

Prior to the second-round presentation, the panel of domain experts formally evaluated each of the received proposals on a number of dimensions defined by the grant review process. As described above, this pre-presentation review was based solely on the detailed "business plan-like" application form. We used an eight-item questionnaire that included items related to reasonableness of request, technical merit, commercial potential, ability to sustain competitive advantage, and ability of the project team to execute the plan, dimensions that directly correspond to those reported by Heslop et al. (2001) and Astebro (2004). Reviewers used an 11-point Likert scale to score each item. For this analysis we summed the scores of the eight items, so total scores could range from 0 to 80. Scoring was done individually and independently. After the presentation, Q&A, and discussion period, the reviewers again evaluated the technology using the same instrument; thus we had a pre- and post-presentation metric by reviewer for each technology. The post-presentation/discussion scoring was likewise done individually and independently. For reference, for the 2006 screening panel the average of all pre-presentation scores was 59.7, while the average of all post-presentation scores was 60.3. Thus the presentation, Q&A, and panel discussion did not significantly change the total average score, either positively or negatively, across the full data set; however, significant changes in the average scores of individual technologies by the panel members were noted.
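To make the scoring arithmetic concrete, the following is a minimal sketch in Python. The item names and example ratings are illustrative assumptions, not the agency's actual instrument; the paper names only five of the eight questionnaire dimensions, so the remaining keys are placeholders.

# Minimal sketch of the 0-80 summed scoring scheme described above.
# Item names and ratings are illustrative assumptions.

def total_score(item_ratings):
    """Sum eight 0-10 Likert item ratings into a single 0-80 total score."""
    assert len(item_ratings) == 8, "the instrument has eight items"
    assert all(0 <= r <= 10 for r in item_ratings.values()), "11-point (0-10) scale"
    return sum(item_ratings.values())

ratings = {
    "reasonableness_of_request": 7, "technical_merit": 9,
    "commercial_potential": 6, "sustainable_advantage": 7,
    "team_execution_ability": 8, "item_6": 7, "item_7": 8, "item_8": 7,
}
print(total_score(ratings))  # 59, near the reported 2006 pre-presentation mean of 59.7

Each reviewer produces such a total twice per technology, once from the written application and once after the presentation and discussion, yielding the pre/post pairs analyzed below.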
RESULTS

To examine Research Question 1, the range and standard deviation of the pre- and post-presentation/discussion panel assessment scores were calculated. Pre-presentation/discussion assessment scores were then subtracted from the post-presentation/discussion assessment scores, with a negative value representing a higher level of consensus after the presentation, Q&A, and discussion period. Table 1 provides the results of this analysis for the 2006 screening panel. Of the sixteen technologies reviewed, thirteen showed increased consensus (both the range and the standard deviation of individual scores decreased) following the presentation, Q&A, and panel discussion. Two technologies showed decreased consensus, and one technology had mixed results. For the 2006 panel, then, it clearly appears that consensus increases after the second-round screening process.

For the 2005 screening panel (not shown in the table), we found somewhat weaker results. Of the thirteen technologies investigated, only seven showed a clear increase in consensus, three showed a clear decrease, and three had mixed results (the score range decreased, but the score standard deviation increased). Combining the 2005 and 2006 panels, of the twenty-nine second-stage screening reviews investigated, twenty (68.9%) resulted in increased consensus among the expert panel members, while five (17.2%) resulted in decreased consensus. Overall, the results coincide with previous decision dilemma research indicating that individuals tend to build consensus following group discussion about the dilemma.
TABLE 1
Panel Consensus – 2006 Panel

             Pre-presentation        Post-presentation
             Range      SD           Range      SD           Consensus
Tech 1       43.0       17.03        20         7.98         Increased
Tech 2       41         15.67        34         12.38        Increased
Tech 3       23         9.30         11         4.54         Increased
Tech 4       39         14.03        32         12.11        Increased
Tech 5       30         10.48        24         8.28         Increased
Tech 6       25         11.89        45         18.55        Decreased
Tech 7       24         11.35        11         5.06         Increased
Tech 8       18         9.01         24         10.59        Decreased
Tech 9       22         10.29        23         9.94         Mixed
Tech 10      29         19.47        16         6.75         Increased
Tech 11      17         25.12        7          3.30         Increased
Tech 12      22         18.14        19         8.44         Increased
Tech 13      42         18.89        32         15.21        Increased
Tech 14      20         7.72         15         5.41         Increased
Tech 15      31         13.68        21         7.90         Increased
Tech 16      28         13.01        22         9.08         Increased
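The classification rule behind Table 1 can be sketched as follows. This is a hypothetical reconstruction, not the authors' actual code; population standard deviation and the example scores are assumptions, since the paper does not specify either.

# Hypothetical reconstruction of the consensus test behind Table 1.
# A technology shows "Increased" consensus when both the range and the
# standard deviation of panel scores fall after the presentation, Q&A,
# and discussion; "Decreased" when both rise; "Mixed" otherwise.

from statistics import pstdev  # population SD is an assumption here

def consensus_change(pre_scores, post_scores):
    pre_range = max(pre_scores) - min(pre_scores)
    post_range = max(post_scores) - min(post_scores)
    pre_sd, post_sd = pstdev(pre_scores), pstdev(post_scores)
    if post_range < pre_range and post_sd < pre_sd:
        return "Increased"
    if post_range > pre_range and post_sd > pre_sd:
        return "Decreased"
    return "Mixed"

# Illustrative 0-80 panel totals for one technology (invented data):
print(consensus_change([40, 55, 62, 70, 63], [52, 58, 60, 66, 61]))  # Increased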
The second research question investigated whether there were moderating effects related to the professional background of the expert panelists. In other words, are expert panel members with certain professional backgrounds more "sticky" in their decisions? For this analysis, only the 2006 panel data were used.

The professional backgrounds of the expert reviewers were classified as "Active Technical" (currently a lab scientist or engineer), "Active Entrepreneur" (currently senior management in a small emergent high technology firm), "Venture Capitalist" (currently a senior manager in a VC fund or an active angel investor), and "Consultant/Administrator" (currently a consultant or a program administrator with a government agency).
"Decision stickiness" was analyzed in two ways. First, for each category of professional background, we computed the percentage shift from the original pre-presentation/discussion score. Second, for each category we computed the percentage of technologies for which that category was "most sticky," that is, showed the smallest shift (compared to the other professional categories) from the original pre-presentation/discussion score, and the percentage for which it was "least sticky," that is, showed the largest shift. This yields a ranking of "decision stickiness" across the professional categories. Table 2 presents this analysis.

From Table 2, it appears that technical experts and equity investors were the most "decision sticky" in their assessments; that is, they deviated least from their original assessments. Consultants and administrators, on the other hand, were the least "decision sticky." Ranking "decision stickiness" within each technology's presentation/discussion, we note that active technical panelists were the most "decision sticky" (least deviation from original scores) in seven of the twelve (58.33%) presentation/discussions in which they participated, and least "decision sticky" (most deviation from original scores) in only one. By contrast, consultants and administrators were the most "decision sticky" in only two of the sixteen (12.50%) presentation/discussions in which they participated, and least "decision sticky" in nine (56.25%). Venture capitalists were least "decision sticky" in 18.18% of the presentations/discussions, while entrepreneurs were least "decision sticky" in 31.25%.

TABLE 2
Panel Member "Stickiness" – 2006 Panel
Professional Background         Percent "most sticky"         Percent "least sticky"        Percent deviation from
                                in a particular discussion    in a particular discussion    pre-discussion score
Active Technical                58.33%                        8.33%                         6.53%
Active Entrepreneur             18.75%                        31.25%                        10.68%
VC/Angel Investor               27.27%                        18.18%                        5.33%
Consultant and Administrator    12.50%                        56.25%                        16.75%
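The "most sticky" and "least sticky" counts in Table 2 can be sketched as follows. The data structure and numeric values are invented for illustration; only the category labels follow the paper.

# Sketch of the per-technology stickiness ranking behind Table 2.
# shifts[tech][category] holds that category's mean absolute percentage
# shift from its pre-presentation score for one technology; the category
# with the smallest shift is "most sticky" for that technology, and the
# category with the largest shift is "least sticky".

from collections import Counter

def stickiness_ranks(shifts):
    most, least = Counter(), Counter()
    for tech_shifts in shifts.values():
        most[min(tech_shifts, key=tech_shifts.get)] += 1   # smallest shift
        least[max(tech_shifts, key=tech_shifts.get)] += 1  # largest shift
    return most, least

shifts = {  # invented example data for two technologies
    "Tech 1": {"Active Technical": 0.04, "Active Entrepreneur": 0.09,
               "VC/Angel Investor": 0.06, "Consultant/Administrator": 0.18},
    "Tech 2": {"Active Technical": 0.05, "Active Entrepreneur": 0.12,
               "VC/Angel Investor": 0.03, "Consultant/Administrator": 0.15},
}
most, least = stickiness_ranks(shifts)
print(most)   # Counter({'Active Technical': 1, 'VC/Angel Investor': 1})
print(least)  # Counter({'Consultant/Administrator': 2})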
Caution must be taken in interpreting our results. While the sample of experts and technologies we analyzed is larger than in most, if not all, previous research examining real investment decision making for small emergent technology-based firms, it is still limited. For example, within the full 2006 panel process there were six active technical personnel (most held doctorates), three entrepreneurs, six consultants and administrators, and four equity investors.
Taken together, however, our results regarding both consensus (Research Question 1) and "decision stickiness" (Research Question 2) lead to an intriguing conclusion: while second-stage business plan/technology application reviews involving panel presentations and discussion do lead to greater consensus regarding the investment/grant decision, the consensus process appears to converge around the original assessments of the technical expert reviewers and, in some cases, the actual equity investors. Consensus thus appears to be moderated by the professional background of the reviewers. Our results are particularly relevant in suggesting how the membership of grant application review panels, private equity investor screening committees, and product development stage-gate groups may affect the decision making process, and ultimately result in selecting certain investment opportunities over others.

Clearly, additional research is needed in this area. For example, each of the twenty-nine technology presentations/discussions analyzed in this research was videotaped. Analysis of these videotapes using SYMLOG may reveal how consensus is built and how it converges around particular individuals within the review panel.

SO WHAT?

Raising capital to support early stage technology development is a high stakes game for would-be entrepreneurs with a vision for commercializing their technology-based ideas. From this perspective, it is important to better understand the process by which funding decisions are made by equity investors, grant agencies, and corporate R&D departments. In almost all of these cases, successful applicants must pass through a multi-stage evaluation process that begins with a prescreen of written applications or business plans and culminates in a competitive presentation before a panel of "expert" decision makers.

This exploratory research suggests that technical experts who form initial impressions about a technology tend to be the most difficult to sway to a different perspective on the basis of an oral presentation. We did note, however, that expert panelists as a whole, when given the opportunity to discuss a proposal with peers from different backgrounds, can be convinced to adopt a different point of view. The importance of the oral presentation in the overall funding decision process is thus underscored. Applicants seeking funding for early stage technology development would be well advised to present a solid case that can stand up to the challenging queries that can be expected from technical experts. This is important for applicant companies when deciding who in the organization should make the presentation: if a business development director is serving as the company spokesperson, he or she should either be well versed in the underlying technology or have a technical support person in attendance to field such questions.

From a research perspective, this exploratory project links the equity funding literature from entrepreneurship research to the psychological literature on decision dilemmas. We believe that there is great opportunity to further our understanding of decision dilemmas as they apply to entrepreneurial challenges, and we hope that this initial study will serve as a basis for additional inquiry and exploration in this area.
REFERENCES

Ajamian, G., and Koen, P.A. 2002. Technology stage gate: A structured process for managing high risk, new technology projects. In P. Belliveau, A. Griffin, and S. Somermeyer (Eds.), The PDMA ToolBook for New Product Development. New York: John Wiley and Sons, 267-295.

Astebro, T. 2004. Key success factors for technological entrepreneurs' R&D projects. IEEE Transactions on Engineering Management, 51(3): 314-321.

Bonito, J., and Lambert, B. 2005. Information similarity as a moderator of the effect of gender on participation in small groups: A multilevel analysis. Small Group Research, 36: 139-165.

Cooper, R., Edgett, S., and Kleinschmidt, E. 2002. Optimizing the stage-gate process: What best practice companies are doing - Part I. Research-Technology Management, 45(5): 21-27.

DeNoble, A., Ehrlich, S., and Galbraith, C. 2007. Forecasting the success of early stage technologies developed by small enterprises. Paper presented at the USASBE national conference, Florida.

Galbraith, C., DeNoble, A., Ehrlich, S., and Kline, D. 2007. Can experts really assess future technology success? A neural network and Bayesian analysis of early stage technology proposals. Journal of High Technology Management Research, 17: 125-137.

Galbraith, C., Ehrlich, S., and DeNoble, A. 2006. Predicting technology success: Identifying key predictors and assessing expert evaluation for advanced technologies. Journal of Technology Transfer, 32(1): 673-684.

Heslop, L., McGregor, E., and Griffith, M. 2001. Development of a technology readiness assessment measure: The cloverleaf model of technology transfer. Journal of Technology Transfer, 26: 369-384.

Hopthrow, T., and Hulbert, L. 2005. The effect of group decision making on cooperation in social dilemmas. Group Processes & Intergroup Relations, 8(1): 89-100.

Kelly, J., and Loving, T. 2004. Time pressure and group performance: Exploring underlying processes in the attentional focus model. Journal of Experimental Social Psychology, 40(2): 185-198.

Linton, J., Walsh, S., and Morabito, J. 2002. Analysis, ranking and selection of R&D projects in a portfolio. R&D Management, 32(2): 139-148.

Ozer, M. 1999. A survey of new product evaluation models. Journal of Product Innovation Management, 16: 77-94.

Shepard, D., and Zacharakis, A. 2002. VCs' decision processes: Evidence suggesting more experience may not always be better. Journal of Business Venturing, 18: 381-401.
Thatcher, A., and De La Cour, A. 2003. Small group decision-making in face-to-face and computer-mediated environments: The role of personality. Behaviour & Information Technology, 22(3): 203-218.