Individual creativity and team engineering design: a taxonomy for team composition Kylie Berger, Andrea Surovek, Dean Jensen, South Dakota School of Mines and Technology Rapid City, SD and David Cropley, University of South Australia

Abstract—In 2012, the authors developed a competition called "Design Wars," an all-day team design competition modeled on reality television competitions such as Project Runway or Junkyard Wars. The purpose of the competition was to test the student teams' abilities to develop creative solutions to an open-ended problem given limited time and unconventional resources. One question that arose while developing the competition was the impact that individual creativity and team creative composition would have on the final product. The results were somewhat surprising; teams with the highest average scores of individual creativity performed the worst in judging of the creativity of the final product. While the reasons for this are unclear, the most likely reasons were a) lack of time to incubate creative ideas; b) an inability to articulate creative ideas physically in the constructed project; or c) a limit in the ability of creative, or divergent, thinkers to converge upon a single solution. Additionally, teams that had a large range of individual creativity scores, or more specifically, a single dominant "creative" thinker, also failed to score well in the creativity portion of the competition. The current work in progress examines the composition of design teams to develop a taxonomy based on a) average team scores of individual team member creativity and b) range of individual creativity on a team. Based on the Innovation Phase Model (IPM), different phases of innovation emphasize different dominant thinking processes. The IPM includes the stages of Invention (preparation, activation, generation, illumination, verification) and Exploitation (communication and validation). Teams that excel in invention may not excel in exploitation. Alternatively, some teams may actualize their ideas very well, but may not have the same ability to generate fluency, flexibility and novelty in their solutions.
By categorizing team characteristics, we can examine if specific team compositions are more likely to perform well in specific phases of the IPM. The current focus is on the divergent stages of Invention, including activation (problem definition) and generation (fluency in solutions), as well as the convergent stage of illumination (reduction to a few promising solutions). The overall objective will be to develop strategies specific to team composition that improve performance in all phases of innovation.

Keywords—creativity, innovation, competition, self-assessment, design, team design

I. INTRODUCTION

In 2012, 16 teams of 4 engineering students competed in an all-day, team design event called Design Wars. The details of the event can be found in Oihus et al. (2013) [3], but the major components can be summarized as follows:
• Teams were divided into 2 groups – "build floor" and "design office" – to require distance teaming
• The design problem was simple and open-ended – get a tennis ball from a small blue disk to a basket initially elevated 5 feet off the floor and 6 feet away (groups were not explicitly told they could change the initial configuration, nor were they explicitly prevented from doing so)
• Students were only allowed to use the nontraditional materials that were provided
• Building was not allowed during the first half of the competition, to provide time for collaborative design
• To demonstrate distance collaboration and communication, students in the design office were required to develop as-built documents matching what was constructed on the build floor
• Scoring was based on aspects of creativity defined by Guilford [2] as well as aesthetics, functionality and documentation.

The initial intent of the competition was to examine how students performed using mobile computing in a distance teaming design event. The primary goal that arose during development of the competition, however, was to help students develop creativity skills by presenting a very open-ended problem with no best solution; in fact, the judges were unaware of the provided materials beforehand so as not to bias judging with a possible preferred solution. While the competition involved cash prizes, there was no extrinsic negative consequence for competing, such as a poor grade.

Creativity is a core component of engineering, essential to innovation. Modern design practice dictates that engineers need to be agile, be able to work in teams, and be able to converge on innovative solutions to complex problems [1]. Unlike creativity in other disciplines, creative engineering designs rarely arise from a single person practicing "Big C" creativity [9], or rather creativity that is considered exceptional in a field (e.g. Mozart, Jim Henson, John Lasseter). More often, engineering creativity is a matter of being able to not over-constrain a problem, think divergently when necessary, and apply "little c," or everyday problem-solving, creativity. More importantly, novelty is insufficient to describe engineering creativity; solutions must be functional and exploitable as well. To this end, Cropley and Cropley [8] discuss creativity specific to the engineering domain as functional creativity. One interesting question that arose from Design Wars was "how does individual creativity affect team creative output?"

Based on data from the competition [3], individual creativity scores from a self-assessment were compared to team averages. From this, informal predictions were made as to which teams might score the highest on creativity as measured by judges' scores on the creative components of flexibility, fluency, elaboration, and novelty [2]. The results were somewhat surprising; teams with the highest average scores of individual creativity actually scored the lowest in the creativity of the final product, as indicated by the pink, lower right side box in Fig. 1. While the reasons for this are unclear, based on both general creativity theory and application of the innovation phase model described in part IV, the most likely reasons were presumed to be a) insufficient time to incubate creative ideas; b) an inability to manifest creative ideas physically in the constructed project; or c) a limit in the ability of creative, or divergent, thinkers to converge upon a single solution.

[Figure 1 here: scatter plot of mean individual CREAX score for each team (horizontal axis, 54.0–78.0) versus mean of creativity component scores for the team (vertical axis, 10–35).]

Figure 1. Team aggregate creativity versus product creativity score for Design Wars [3]

Additionally, teams that had a large range of individual creativity scores, often due to a single dominant creative member, also failed to score well in the creativity portion of the competition. The current study, in development, will examine the impact of team composition on overall creative output and consider specifically if different teams perform better during different phases of the innovation process, from ideation to exploitation and realization of design.

II. ENGINEERING CREATIVITY AND INNOVATION

Traditional definitions of creativity tend to emphasize the value of novelty in design. Although novelty is arguably important in all evaluations of creativity, creative engineering designs are unique in that they must exhibit functionality on top of novelty [8]. A successful solution in engineering must satisfy the parameters of the original problem; a creative design extends this solution to provide novelty, elegance and potentially genesis – the ability to extrapolate ideas from the design to spawn solutions in seemingly unrelated areas. Consequently, it is unfair to evaluate an engineering design using the same metrics that are used to judge, say, a painting or a poem. One means by which to judge creativity is through final evaluation of the product, such as by the method described in Cropley et al. [4] based on effectiveness, novelty, elegance and genesis. These differ somewhat from the Guilford [2] definitions and are specific to functional creativity.

Evaluating contributions to creativity in the design process is a difficult task. While team performance in design has been the subject of numerous studies, little is available with respect to the specific topic of creativity in team design problems. Specifically, limited research has been completed to date about creativity in engineering design teams and the role individual creativity plays in the creativity of the overall team product or articulation of the team idea. The best current methods for predicting creative outcome in team settings are to apply what has been discovered about the creativity of an individual and translate metrics of individual creativity to measure the creativity of a team as a whole.

For Design Wars, the creativity of each individual was evaluated using a self-assessment tool called the CREAX that can be found, free of charge, on the internet [10]. As evidenced by the results, individual scores from the CREAX were not a good predictor of team creative output. It is important to be upfront about a few points about the collected data: the CREAX, while based on basic concepts of creativity and similar in style to better-known assessments of individual creativity [5, 6], is not a formally validated instrument. The separation of students into two groups in two locations complicates any data on the design process, since there is no formal way to retroactively quantify individual contributions to the designs.
Additionally, there is no way to specifically evaluate the effect that limited time had on the convergence and implementation of ideas. In other words, this preliminary study did not allow for separation of ideation and actualization. However, both the data from Design Wars and the somewhat counterintuitive results have allowed for some preliminary consideration of how best to examine team creativity based on individual contributions and how to consider team composition.

III. PRELIMINARY EVALUATION OF DATA

Data collected from the competition includes an individual creativity assessment completed by each participant before the competition (CREAX) and overall team product assessment scores as assigned by judges. By comparing individual creativity scores with the overall team performance as determined by the Design Wars judges, it is possible to determine if patterns exist between the individual scores which constitute a team and the overall effectiveness of the team. It should be noted that the judges were not experts in creativity; rather, they were selected to provide a broad range of experience and included a psychologist, an electrical engineering instructor, an industrial engineering faculty member and an entrepreneur. They were provided basic instruction in the five categories of creativity (Guilford + aesthetics) prior to judging.

In order to compare CREAX scores with competition judging scores, a method of characterizing participating groups based on constituents' CREAX scores is necessary. The authors have created a taxonomy based on the average overall CREAX score of group members and the range between the highest and the lowest team member's CREAX score.
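As a sketch of the kind of comparison described, one could correlate team-mean CREAX scores against judged creativity scores. The helper function and the data below are hypothetical, purely to illustrate the analysis; they are not from the study:

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation between two paired samples (illustrative sketch)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: team-mean CREAX score vs. judged creativity score.
team_creax = [58.2, 61.5, 63.0, 66.4, 71.8, 75.1]
judge_score = [24, 30, 27, 22, 18, 15]
print(round(pearson_r(team_creax, judge_score), 2))
```

With data shaped like Fig. 1, the correlation would come out negative, echoing the counterintuitive result that higher team-mean individual creativity did not predict higher judged creativity.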

A group is named by means of the taxonomy using a simple method (see Table 1). The first portion of the group label indicates the average team CREAX score. Here the team is placed in one of three categories: High, Medium, or Low. These categories indicate the average CREAX score as it compares to the global average. Currently, the global average CREAX score is approximately 62 points. Therefore, it is fitting to set the Medium category from 60 points to 64 points, the High category above 64 points, and the Low category below 60 points.

The second portion of the group label indicates the variability of CREAX scores within the group; that is, this portion is named for the range between the highest group member score and the lowest group member score. Once again, a team can be placed in one of three categories: High, Medium, or Low. The High category is associated with a range greater than 25 points, the Medium category with a range between 15 and 25 points, and the Low category with a range below 15 points. Finally, the two categories that a team is identified by are hyphenated. For example, a team with a medium average CREAX score and a medium range between the highest and lowest CREAX scores would be named Medium-Medium.

Surprisingly, no clear patterns have emerged from evaluating the top-performing teams. The team finishing with the highest competition score (by a sizable margin), known in competition as team "N," is categorized as Medium-Low. That is, the average team CREAX score is close in value to the global average and the variability of CREAX scores among the four team members is low. In other words, nothing should stand out about this team other than their apparent creative homogeneity. The teams finishing second and third, teams "B" and "A" respectively, are much closer in final competition score (within 1.5 points out of 100 total points). Team "B" is categorized as High-High and team "A" is categorized as Low-Medium.
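The labeling scheme above is mechanical enough to sketch in code. The following is an illustrative helper (the function name and example scores are hypothetical, not from the paper), applying the stated thresholds: team average relative to the global mean of roughly 62 (Medium = 60–64), and range of member scores (Low < 15, Medium 15–25, High > 25):

```python
def taxonomy_label(scores):
    """Return the hyphenated taxonomy label for a team's CREAX scores."""
    avg = sum(scores) / len(scores)
    spread = max(scores) - min(scores)

    # First portion: team average vs. the global average of ~62 points.
    if avg > 64:
        avg_cat = "High"
    elif avg < 60:
        avg_cat = "Low"
    else:
        avg_cat = "Medium"

    # Second portion: range between highest and lowest member scores.
    if spread > 25:
        range_cat = "High"
    elif spread < 15:
        range_cat = "Low"
    else:
        range_cat = "Medium"

    return f"{avg_cat}-{range_cat}"

# A team near the global average with little spread, like winning team "N":
print(taxonomy_label([60, 62, 63, 65]))  # Medium-Low
```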
It is worth pointing out that overall finish was dependent on more than just creativity and was also directly tied to quality of implementation and documentation.

IV. INNOVATION PHASE MODEL

In examining innovation in organizations, Cropley et al. [7] examined seven phases of innovation and six personal dimensions to determine the appropriate dimensions for success in different phases of innovation, differentiating between invention and exploitation. The phase model (see Figure 2, shown at the end of the paper) allows for determination of when creativity is beneficial to the innovation process. For example, in considering the dimension of process/thinking style, only two of the five phases of invention are better addressed through divergent thinking. The recognition that innovation requires more than simply divergent ideation allows for a stronger exploration of why highly creative individuals may not develop the most highly innovative products or designs.

V. RESEARCH PLAN

The goal of the research is to develop a taxonomy of engineering team compositions based on the mixture of individual creativities, and to identify the likely effects of those compositions on team creative output. To this end, we will ask students in courses incorporating design projects to take individual assessments of their creativity, and then assign students to four- to five-person teams for the design project(s). At the completion of the course, the products resulting from the design teams will be evaluated using the CSDS product innovation assessment [5]. Table 1 (below) shows the intended coverage of the teaming conditions.

Table 1. Design of the Experimental Conditions.

                         Divergence of Individual Creativities
Average of Individual    High Divergence   Typical Divergence   Low Divergence
Creativities
High Team Average        X1                X2
Typical Team Average
Low Team Average         X4                X3

The measures that we intend to use for team assignments are defined as:
• Average of Individual Creativities – each student is assessed on their own creativity, and the mean of the individual scores across the team is computed.
• Divergence of Individual Creativities – each student is assessed on their own creativity, and the difference between the score of the most creative individual and the median score of the remaining team members is computed. The team divergence scores will be normalized, and then broken into High, Typical, and Low categories for the taxonomy.
• Team Creativity – at appropriate times in the design project life cycle, the creative output of the team will be assessed and compared with the output from other teams.

Validated instruments for individual creativity will be evaluated to establish which best examines capacity for individual divergent thinking. The choice of instruments will draw on the collaborative expertise of psychology researchers well versed in creativity assessment. The expected instrument for assessing team creative output is the Creative Solution Diagnostic Scale (CSDS) [5], which considers effectiveness, novelty, elegance and genesis in examining functional creativity.
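The divergence measure defined above can be sketched as a small function. This is an illustrative implementation of the stated definition only; the cross-team normalization and bucketing into High/Typical/Low are omitted, and the function name and example scores are hypothetical:

```python
import statistics

def divergence(scores):
    """Gap between the most creative member's score and the median
    of the remaining members' scores (per the study's definition)."""
    ordered = sorted(scores, reverse=True)
    top, rest = ordered[0], ordered[1:]
    return top - statistics.median(rest)

# A team with one dominant creative thinker shows a large divergence:
print(divergence([85, 60, 58, 62]))  # 85 - median(62, 60, 58) = 25
```

Using the median of the remaining members, rather than the minimum, makes the measure sensitive specifically to a single dominant member rather than to overall spread.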

The largest obstacle in conducting this research is statistical relevance – the number of teams that may be formed and assessed during the study period. To provide enough team data points for the study, the process will be conducted in several undergraduate courses, covering students from freshman through senior years, and the process will be replicated twice in each course over the span of the project. It is a desired outcome of this work to determine not only means for teaming that may predict creative output for innovation, but to parlay this information into the development of appropriate interventions for teams less well suited to different phases of innovation. It is not anticipated that there is a single optimal team make-up, but rather that there are more appropriate team compositions for different phases of innovation (Figure 2).

ACKNOWLEDGMENTS Design Wars was funded by a State of South Dakota Mobile Computing Grant. The research team would also like to thank the local businesses that donated materials and services, the judges, and student participants and volunteers.

REFERENCES
[1] National Academy of Engineering (2004). The Engineer of 2020: Visions of Engineering in the New Century. The National Academies Press, Washington, DC.
[2] Guilford, J. P. (1950). "Creativity." American Psychologist, 5(9), 444–454.
[3] Oihus, P., Surovek, A., and Jensen, D. (2013). "Design Wars: Developing Student Creativity Through Competition." Proceedings of the Frontiers in Education Conference, Oklahoma City, Oct. 2013.
[4] Cropley, D. H., Kaufman, J. C., and Cropley, A. J. (2011). "Measuring creativity for innovation management." Journal of Technology Management & Innovation, 6(3), 13–30.
[5] Kaufman, J. C., Plucker, J. A., and Baer, J. (2008). Essentials of Creativity Assessment. John Wiley and Sons, Hoboken, NJ, 220 pp.
[6] Kaufman, J. C. (2009). Creativity 101. Springer Publishing, New York, NY, 242 pp.
[7] Cropley, D. H., Cropley, A. J., Chiera, B. A., and Kaufman, J. C. (2013). "Diagnosing Organizational Innovation: Measuring the Capacity for Innovation." Creativity Research Journal, 25(4), 388–396.
[8] Cropley, D. H. and Cropley, A. J. (2005). "Engineering creativity: A systems concept of functional creativity." In J. C. Kaufman and J. Baer (Eds.), Creativity Across Domains: Faces of the Muse, Chapter 10, pp. 169–185. Lawrence Erlbaum Associates, Mahwah, NJ.
[9] Kozbelt, A., Beghetto, R. A., and Runco, M. A. (2010). "Theories of Creativity." In J. C. Kaufman and R. J. Sternberg (Eds.), The Cambridge Handbook of Creativity. Cambridge University Press. ISBN 978-0-521-73025-9.
[10] CREAX test of creativity, available at: http://www.testmycreativity.com (last visited 7/8/14).

Figure 2. Innovation Phase Model (Cropley et al., 2013) [7]