Principles of Human Computer Interaction in Crowdsourcing to Foster Motivation in the Context of Open Innovation

Patrick Brandtner¹, Andreas Auinger¹, and Markus Helfert²

¹ Department for Digital Business, University of Applied Sciences Upper Austria, Campus Steyr
{patrick.brandtner, andreas.auinger}@fh-steyr.at
² School of Computing, Dublin City University
[email protected]

Abstract. Crowdsourcing platforms have recently been suggested and increasingly used to tap external knowledge sources for innovation activities in organizations. Critical success factors for such platforms include user motivation and participation; however, the effect of these factors is still little understood. The aim of this paper is to analyze the extent to which selected Crowdsourcing platforms consider motivating and incentive factors from a human computer interaction perspective. Motivated by Malone's principles for designing enjoyable user interfaces, we employed this framework as a reference to conduct a participatory heuristic evaluation. The results demonstrate that there are several areas of improvement: at present, intrinsically motivating factors in regard to the user interface are addressed only to a limited extent.

Keywords: crowdsourcing, open innovation, motivation, human computer interaction, gamification

1 Introduction

Shorter product lifecycles, increasing competition and cost pressure, paired with rising quality requirements, product individualization and mass customization, challenge enterprises in global markets. Advanced industrial nations are no longer able to compete solely on cost leadership. The capability to innovate has become a vital core competency for developing a sustainable competitive advantage [1]. At the same time, managing innovation is one of the most sophisticated and complex challenges an enterprise faces [2, 3]. An interesting concept in this regard is "Open Innovation", which describes a change in an organization's innovation process: the integration of external stakeholders and knowledge sources into a company's innovation activities. This not only increases the efficiency, but also the effectiveness of the activities and tasks along the innovation process [4, 5]. Hence, it is important to define appropriate process support for these activities. Recently, the concept of Crowdsourcing has been suggested for this purpose.

The integration of the crowd to find solutions to current problems and to address urgent issues in organizations is known as "Crowdsourcing" [6, 7]. This approach allows organizations to employ large numbers of dispersed workers (users) over the internet through open calls for contributions, with the goal of finding solutions to problems by outsourcing tasks to the general internet public [8, 9]. In the context of the Open Innovation approach, Crowdsourcing can be considered an opening of the innovation process: the phase of idea generation is sourced out in order to integrate numerous outside competencies from a potentially large and unknown population by using web facilities and Web 2.0 tools and concepts [10]. According to authors like Sloane, Crowdsourcing is even "one particular manifestation of Open Innovation" [11].

There is a large number of existing web-based Crowdsourcing solutions to support innovation management, which differ from each other in terms of functionality, main purpose and underlying processes. An important, but often overlooked aspect of these platforms is human computer interaction, and in particular the motivation and willingness of individuals to contribute. Several researchers have suggested that platforms should consider motivating principles to foster and maintain user engagement [7, 12]; nevertheless, these factors are often neglected. Several studies have already investigated factors that influence participation in Crowdsourcing [13, 14]. Most of them focused on extrinsic motivation; only some addressed the role of intrinsic motivation or the importance of the user interface for fostering and maintaining user motivation [13–16].

The aim of the current paper is to analyze the design of user interfaces of existing Crowdsourcing solutions from a human computer interaction perspective. We examine how selected Crowdsourcing platforms address intrinsic user motivation in conjunction with the user interface and implement motivating principles to foster human computer interaction in the context of Open Innovation. In a first step, we analyze, based on the literature, the principles and processes of Crowdsourcing in the context of Open Innovation (section 2). In a second step, relevant types of motivating and incentive factors in Crowdsourcing are discussed (section 3). Thirdly, existing Crowdsourcing platforms are surveyed and the most relevant ones are analyzed and evaluated in regard to motivational factors meant to create, increase and maintain user involvement and motivation in conjunction with user interface design (section 4).

2 Crowdsourcing in the Open Innovation Process

2.1 The Open Innovation Process

For this research we first selected an appropriate open innovation process to serve as a reference framework. Many process models do not specifically focus on open innovation or do not cover the corresponding aspects and factors. One of the most seminal researchers in this context is Henry Chesbrough. Having first mentioned the term Open Innovation in 2003 [19], he is often referred to as the "father of Open Innovation" [20, 21]. Based on Chesbrough's principles and concept of Open Innovation, many open innovation process models have been developed, e.g. the model by Docherty [22], which was employed by Robert Cooper to adapt his popular and widely used Stage-Gate model [23–25] into an innovation process model that specifically addresses open innovation aspects [26]. It consists of a front-end process, a development stage and a commercialization stage. Companies should use information across the stages as well as from external sources, which makes open innovation challenging and extremely complex. Especially the activities at the front end do not only create internal ideas but also need to consider externally generated ideas from multiple sources [26].

2.2 Crowdsourcing

The term Crowdsourcing was first used by Jeff Howe in 2006 [6] and refers to "the act of taking a task traditionally performed by a designated agent (such as an employee or a contractor) and outsourcing it by making an open call to an undefined but large group of people" [9]. According to Gassmann, Crowdsourcing can be seen as an interactive strategy of outsourcing knowledge generation and problem solving to external actors through a public or semi-public call for inputs. Such calls typically address creative tasks and topics and are usually realized through a website or platform [27]. Crowdsourcing typically involves three categories of actors: the crowd, the companies or organizations that benefit from the inputs of the crowd, and an intermediary platform that links the two and serves as a Crowdsourcing enabler [28]. In this paper the emphasis is laid on Crowdsourcing platforms as well as on the crowd and the individual users it consists of. More precisely, the user interfaces of the most successful platforms were analyzed in regard to motivating and incentive factors.
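To make this three-actor model tangible, the following minimal sketch (Python) encodes it as plain data types. All class and field names are our own illustration, not taken from any existing platform or API.

```python
from dataclasses import dataclass, field

# Illustrative data model of the three actor categories described above:
# the crowd (authors of contributions), the benefiting company (owner of an
# open call), and the intermediary platform linking the two.

@dataclass
class Contribution:
    author: str          # a member of the crowd
    challenge_id: str
    text: str

@dataclass
class OpenCall:
    challenge_id: str
    company: str         # the organization benefiting from the crowd's input
    task: str
    contributions: list[Contribution] = field(default_factory=list)

@dataclass
class IntermediaryPlatform:
    """Links the crowd and the companies; serves as a Crowdsourcing enabler."""
    name: str
    open_calls: list[OpenCall] = field(default_factory=list)

    def publish_call(self, call: OpenCall) -> None:
        self.open_calls.append(call)

    def submit(self, challenge_id: str, contribution: Contribution) -> None:
        for call in self.open_calls:
            if call.challenge_id == challenge_id:
                call.contributions.append(contribution)
```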

2.3 The Principles of Crowdsourcing in the Open Innovation Process

As mentioned before, Crowdsourcing can be considered an opening of an organization's innovation process by sourcing out some of the activities at the front end of innovation, especially the idea generation phase [10, 11]. Crowdsourcing shares several principles and similarities with the concept of Open Innovation. Chesbrough postulates that an organization does not only depend on internal but increasingly also on external sources of knowledge, and that there are inside-out and outside-in flows of knowledge [5, 3, 19]. Crowdsourcing follows the same principle: by distributing knowledge and by opening an organization's R&D process to the crowd, competitive advantages can be achieved [29]. The main difference between the two concepts is that Open Innovation focuses on the innovation process, while Crowdsourcing can be applied in many different application domains [27]. Furthermore, Crowdsourcing does on the one hand provide organizations with access to a large number of dispersed, anonymous individuals and their knowledge, but on the other hand concentrates only on outside-in flows of knowledge in the sense of Open Innovation [28]. In our research we therefore view Open Innovation and Crowdsourcing as complementary, seeing Crowdsourcing as an opportunity to provide inbound flows of knowledge.

3 Motivational Factors in Conjunction with the User Interface

There are basically two categories of motivating factors in Crowdsourcing: intrinsic and extrinsic [30]. Extrinsic motivations are "the motivation to work for something apart from and external to the work itself" [16] and include e.g. financial rewards or free products [31] and new career opportunities [32]. As the current research project focuses on intrinsic motivations only, extrinsic motivations are not part of this paper. Intrinsic motivation can be defined as "the motivation to engage in work for its own sake because the work itself is interesting or satisfying" [16]. Intrinsic motivations include e.g. the exchange of information [33], social identity and influence [32], an entrepreneurial mindset [31], a sense of membership and attachment to a group [34], and also fun, enjoyment or entertainment [35, 36]. The aim of the current research project is to analyze motivating and incentive factors in conjunction with the user interface of Crowdsourcing platforms. Motivated by the work of Malone, who developed a framework for designing enjoyable and intrinsically motivating user interfaces [18, 17], we examine Crowdsourcing platforms through the lens of this prominent framework. The framework is suitable because it emphasizes the intrinsic motivational factors of HCI. More specifically, Malone suggests that factors that make computer games enjoyable and fun to use may also be applicable in a non-gaming context [18]. This approach of using game design elements in a different context dates back to Malone himself and is known today as gamification [37]. According to Fitz-Walter et al., gamification is a growing trend to motivate users and enhance user experience [38], both of which are important prerequisites for Crowdsourcing too. Malone's seminal work dates back to the 1980s, when he conducted several studies on what makes computer games so captivating and exactly which design elements motivate people to interact with them [17, 39]. The primary purpose of these studies was to derive recommendations for highly motivating instructional systems, but Malone's findings, and gamification in general, are also of great relevance for designing other user interfaces, e.g. Crowdsourcing, idea competition or Open Innovation platforms [38, 37]. In his paper [18], Malone developed a questionnaire to analyze the appeal of computer systems based on three categories: challenge, fantasy and curiosity. These categories and the corresponding subcategories and questions are explained in more detail in section 4.3. According to Malone, they cover the major features of computer games that can be incorporated into other user interfaces [18]. In the course of the current research, this list is taken as a reference framework and adapted for the analysis of Crowdsourcing platforms (cf. section 4.3).
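As a hedged illustration of gamification in a non-gaming context, the toy rule set below (Python) shows how a Crowdsourcing platform might map Malone's challenge and fantasy categories onto badges. All badge names and thresholds are invented for this sketch and do not come from any of the platforms discussed in this paper.

```python
# Toy gamification rules for a hypothetical Crowdsourcing platform.
# Badge names and thresholds are invented for illustration only.

def award_badges(ideas_posted: int, ideas_adopted: int) -> list[str]:
    """Return the badges a contributor has earned so far."""
    badges = []
    if ideas_posted >= 1:
        badges.append("First Idea")           # challenge: a clear, attainable first goal
    if ideas_posted >= 25:
        badges.append("Community Rockstar")   # fantasy: status and self-achievement
    if ideas_adopted >= 5:
        badges.append("Problem Solver")       # challenge: feedback on goal attainment
    return badges

print(award_badges(ideas_posted=30, ideas_adopted=2))
# ['First Idea', 'Community Rockstar']
```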

4 Selection of Platforms and Evaluation Methodology

For our research, relevant Crowdsourcing solutions had to be selected according to specific criteria. Subsequently, we examined the selected solutions with regard to motivational and incentive factors from a human computer interaction perspective, based on appropriate measures and Malone's criteria for designing enjoyable user interfaces.

4.1 Definition of Selection Criteria

Following from the discussion in section 2.2, criteria were defined to discover and select only those Crowdsourcing platforms which meet the requirements to support the activities and tasks along the innovation process. For the current project this means that only Crowdsourcing solutions with an emphasis on supporting organizations in accessing and effectively integrating the knowledge of the crowd are considered relevant. According to Gassmann, Crowdsourcing activities can be divided into five application domains: user-initiated Crowdsourcing, Crowdsourcing intermediaries, public Crowdsourcing initiatives, idea marketplaces and company-initiated platforms [27]. Based on this categorization and our understanding of Crowdsourcing in the context of Open Innovation (cf. section 2.3), only intermediary and company-initiated platforms are relevant for the current research project, as only these two categories include platforms which support organizations in their innovation activities by providing inbound knowledge flows. Gassmann further subdivides these two categories into R&D platforms, marketing & design platforms, freelancer platforms, idea platforms, product idea and problem solution platforms, and branding and design platforms [27]. In order to reduce the large number of intermediary and company-initiated Crowdsourcing platforms, we focused on three subcategories in an initial phase: R&D platforms, idea platforms, and product idea and problem solution platforms. For each of these we then selected one exemplary Crowdsourcing platform. As an exemplary Crowdsourcing intermediary platform in the R&D area we chose InnoCentive (http://www.innocentive.com/), which is frequently described in scientific publications as a popular and successful Crowdsourcing platform for utilizing the crowd as a knowledge source to support an organization's R&D activities [40, 41, 29]. As an example of a Crowdsourcing intermediary idea platform, we selected Atizo (https://www.atizo.com/) because of the high scientific attention this platform has received in the literature [42, 43, 27]. As a typical company-initiated product idea and problem solution platform, we took Dell's IdeaStorm (http://www.ideastorm.com/), which has received much scientific attention recently and is considered popular and successful [44–46]. These solutions were subjected to an in-depth analysis regarding motivational and incentive factors according to a specifically developed evaluation scheme (cf. sections 4.2 and 4.3). The selection logic is summarized in the sketch below.
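The following restatement of this selection logic is purely illustrative; the dictionary layout is ours, while the category and platform names come from the text above.

```python
# Gassmann's five application domains [27]; True marks the two categories
# that provide inbound knowledge flows and are therefore relevant here.
application_domains = {
    "user-initiated Crowdsourcing": False,
    "Crowdsourcing intermediaries": True,
    "public Crowdsourcing initiatives": False,
    "idea marketplaces": False,
    "company-initiated platforms": True,
}

# The three examined subcategories, each with its exemplary platform.
selected_platforms = {
    "R&D platforms": "InnoCentive",
    "idea platforms": "Atizo",
    "product idea and problem solution platforms": "IdeaStorm",
}

relevant = [domain for domain, keep in application_domains.items() if keep]
print(relevant)            # the two relevant categories
print(selected_platforms)  # one exemplary platform per subcategory
```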

4.2 Evaluation Methodology

In order to evaluate the selected Crowdsourcing solutions, we adopted a participatory heuristic evaluation [47] using Malone's motivating principles. This approach combines the domain expertise of a user with a usability expert's know-how, so that not only the necessary usability knowledge but also the required process expertise can be integrated into the evaluation [48, 47]. Applied to the current research, this means that the evaluation of the selected Crowdsourcing platforms is conducted pair-wise: the usability expert gets an overview of the specific platform and its target process before the actual evaluation starts. During the evaluation, the usability expert has to fulfill specific tasks addressing Malone's motivating principles for human computer interaction in the presence of the domain expert, who comments on the usability expert's actions and answers process-related questions when necessary. Hence, the participatory heuristic evaluation was structured by the following methodological steps [47]:

- Preparation:
  - An independent examiner develops realistic application scenarios.
  - To be able to perform various, meaningful scenarios, the usability experts get an overview of the solution and its underlying processes. Hereby the challenge of learning the system is taken away from the user.
- Evaluation:
  - During the evaluation, the usability expert works through the defined application scenarios in the presence of the user (domain expert). The user is asked to comment on the usability expert's actions on the one hand; on the other hand, he is also available for comprehension questions regarding the particular sequences of tasks and actions. By commenting and answering questions, the user supports the usability expert in taking on a standard user's role and the corresponding domain expertise.

Because Malone's criteria could not be evaluated on a standard heuristic evaluation scale, we adapted Nielsen's acknowledged usability evaluation scale, which was designed specifically for usability problems [49], to a simple 3-level rating scale (0 = no support, 1 = weak support, 2 = good support); a data sketch of this rating scheme follows after the scenario list. Following the methodological steps mentioned above, we developed three scenarios which represent core elements and standard processes of Open Innovation Crowdsourcing platforms:

- Scenario 1: Use the provided profile and adjust it according to your preferences; include personal information to position yourself and present your knowledge.
- Scenario 2: Select an existing challenge or problem on the platform and post your own idea or contribution to it.
- Scenario 3: Take a look at existing ideas and possible solutions for the challenge, interact with other users and respond to existing content elements.
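As a minimal sketch, assuming a simple tabular data layout of our own design (not part of the original method), the rating scheme described above could be recorded and aggregated as follows:

```python
from dataclasses import dataclass
from enum import IntEnum

class Support(IntEnum):
    NO = 0      # no support
    WEAK = 1    # weak support
    GOOD = 2    # good support

@dataclass
class Rating:
    platform: str    # e.g. "InnoCentive", "Atizo", "IdeaStorm"
    criterion: str   # criterion ID from Table 1, e.g. "1.2"
    scenario: int    # scenario (1-3) in which the criterion was assessed
    score: Support

def mean_score(ratings: list[Rating], platform: str) -> float:
    """Average support level a platform achieved across all rated criteria."""
    scores = [int(r.score) for r in ratings if r.platform == platform]
    return sum(scores) / len(scores)

# Example: two ratings recorded while working through scenario 2.
ratings = [
    Rating("InnoCentive", "1.1", scenario=2, score=Support.GOOD),
    Rating("InnoCentive", "1.2", scenario=2, score=Support.NO),
]
print(mean_score(ratings, "InnoCentive"))  # 1.0
```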

4.3 Definition of Evaluation Criteria

Based on Malone's criteria for designing an enjoyable user interface (cf. section 3), we developed a set of criteria, in the form of a questionnaire, to analyze the selected Open Innovation Crowdsourcing solutions (cf. section 4.1) with regard to intrinsically motivating factors (cf. section 3). Table 1 juxtaposes Malone's criteria and our project-specific ones, which were adapted to be applicable in the course of the participatory heuristic evaluation (cf. section 4.2).

Table 1. Derivation of project-specific evaluation criteria based on Malone [18]

| Category | Malone's evaluation criteria (cf. section 3) | Adopted evaluation criteria |
|----------|----------------------------------------------|-----------------------------|
| 1. Challenge | (a) Goal: Clear goal definition? Provision of performance feedback? | 1.1. Is a clear goal visible for the user? 1.2. Does the platform provide performance feedback for the user in regard to the level of goal attainment? |
| | (b) Uncertain outcome: Variable difficulty level? Multiple-level goals? | 1.3. Are difficulty levels defined? 1.4. Can the user adjust / select the difficulty level? 1.5. Is the goal outcome uncertain? 1.6. Do goals offer multiple levels of target attainment? |
| 2. Fantasy | (a) Does the interface embody emotionally appealing fantasies? | 2.1. Does the platform embody emotionally appealing fantasies? 2.2. Does the interface address the user's desire for social connection? 2.3. Does the interface embody a feeling of respect and social status? |
| | (b) Does the interface embody metaphors with physical or other systems that the user understands? | 2.4. Does the platform utilize metaphors with recognised and understood physical or other systems? |
| 3. Curiosity | (a) Level of informational complexity: Use of audio & visual effects as decoration / to enhance fantasy / as representation system? Use of randomness to add variety without unreliability? Appropriate use of humor? | 3.1. Does the platform provide an optimal level of complexity? 3.2. Are audio and visual effects used as decoration or to enhance fantasy? 3.3. Does the platform use audio or visual effects as a representation system? 3.4. Is randomness used to add variety to the platform without making it unreliable? 3.5. Does the platform utilize humor to increase the enjoyment of using it in an appropriate way? |
| | (b) Knowledge structure: Capitalization on the user's desire for clear knowledge structures? Introduction of new information when existing knowledge is unsatisfactory? | 3.6. Does the platform evoke user curiosity by making users think their knowledge structures are incomplete, inconsistent or unparsimonious? 3.7. Does the platform support users in making their knowledge structures complete, consistent and parsimonious? |

5 Results of Heuristic Evaluation

In this section we present a summary of our evaluation results, focusing on the most significant observations; the overall results are visualized in Figure 1. Among the selected Crowdsourcing platforms, InnoCentive met most of the evaluation criteria. Compared to Atizo and IdeaStorm, it includes, for example, functionality to evoke user curiosity by making user-specific suggestions of relevant challenges according to personal interests and knowledge fields. To encourage cooperation, InnoCentive tries to foster team building by suggesting that users form or join a group when stuck. Furthermore, InnoCentive was the only platform offering different levels of difficulty for challenges (e.g. premium challenge, grand challenge). Overall, its level of complexity was perceived as "just right" in comparison to Atizo (simple but functional) or IdeaStorm (too complex). Our analyses revealed the following most important areas for improving motivating and incentive factors:

- 1.2. Performance feedback: None of the tools offered feedback on the level of goal attainment in a systematic way.
- 1.4. Adjustable difficulty levels: Although InnoCentive offered different difficulty levels, adjusting them was not possible on any of the platforms.
- 1.6. Multiple-level targets: On all three analyzed platforms, only top-level and no sub-level targets were defined.
- 2.1. Appealing fantasies: Emotionally appealing fantasies were embodied only in a very limited way and mainly addressed the user's need for self-achievement by awarding badges or titles (e.g. "Dell Community Rockstar").
- 2.4. Use of metaphors: Metaphors were used only to a very limited extent, through terms like "innovation pavilion" or "project room" (InnoCentive), symbols like a trophy (Atizo) or titles like "Rockstar" (IdeaStorm).
- 3.2. Audio and visual effects to enhance fantasy: Such effects were mainly used as representational systems and not to enhance creativity and fantasy systematically. Where they were used to enhance fantasy, this occurred only in a few challenges and thus depended on the challenge creator.
- 3.4. Use of randomness and 3.5. Use of humor: Neither variety-adding randomness nor a systematic use of humor could be observed on any of the analyzed platforms.
- 3.7. Knowledge structures: Only InnoCentive supported users in completing their knowledge structures, by suggesting that they form or join a team when stuck. Additional, more effective features could not be found on InnoCentive, Atizo or IdeaStorm.

Criteria that are already implemented very well across all three platforms are:

- 1.1. Clear goal: Each platform provided a clear goal description.
- 1.5. Uncertainty of outcome: The goal outcome was uncertain on all platforms.
- 2.2. Social connection: Internal (groups, teams, community) as well as external (e.g. Facebook, LinkedIn, Twitter) networking was a core element of the analyzed platforms.
- 2.3. Respect and social status: Each platform provided top-solver rankings, awards or an expert search.
- 3.3. Audio and visual effects as representation systems: Network maps, symbols or activity diagrams were used for representational purposes; InnoCentive even offers a mobile app for download.

Figure 1 summarizes the results of the conducted heuristic evaluation. On a scale from 0 to 2 (0 = no support, 1 = weak support, 2 = good support), the result for each evaluation criterion (cf. section 4.3) is visualized for each platform:

Fig. 1. Results of the heuristic evaluation
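Since the figure itself is not reproduced here, the sketch below offers a partial, illustrative reconstruction of its data: it encodes only those scores that follow directly from the result descriptions above (e.g. criterion 1.2 was rated 0 on all platforms) and prints them as simple text bars. It is not the full Figure 1 data set.

```python
# Partial, illustrative reconstruction of Figure 1: only ratings that can be
# read directly from the result descriptions in this section are included.
partial_scores = {
    "1.1": {"InnoCentive": 2, "Atizo": 2, "IdeaStorm": 2},  # clear goal on each platform
    "1.2": {"InnoCentive": 0, "Atizo": 0, "IdeaStorm": 0},  # no systematic performance feedback
    "3.4": {"InnoCentive": 0, "Atizo": 0, "IdeaStorm": 0},  # no variety-adding randomness
    "3.5": {"InnoCentive": 0, "Atizo": 0, "IdeaStorm": 0},  # no systematic use of humor
}

for criterion, by_platform in partial_scores.items():
    for platform, score in by_platform.items():
        bar = "#" * score if score else "."
        print(f"{criterion}  {platform:<12} {bar}")
```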

6 Conclusion

The results of the heuristic usability evaluation demonstrate the importance of motivational factors and reveal areas of improvement for Crowdsourcing platforms from a human computer interaction perspective. The study also reiterates the lack of research in this area. Using participatory heuristic evaluation together with Malone's framework for designing enjoyable user interfaces, we analyzed three Crowdsourcing platforms. The paper also demonstrated the usefulness of our evaluation approach, which will be expanded in further research. In particular, fantasy- and creativity-enhancing features and effects, the utilization of metaphors and the systematic use of randomness and humor could be applied to make Crowdsourcing user interfaces more enjoyable and fun to use. We again observed that gamification is of great relevance and offers great potential in the context of Open Innovation Crowdsourcing platforms. To further analyze how the intrinsically motivating factors of games transfer to Crowdsourcing and to refine the recommendations of the current paper, additional studies have to be conducted in the future.

References

1. Wagner, K.: With Innovation Management to organic growth and performance excellence. In: Spitzley, A., Rogowski, T., Garibaldo, F. (eds.): Open innovation for small and medium sized enterprises. Ways to develop excellence, pp. 7–18. Fraunhofer Institute for Industrial Engineering, Stuttgart (2007)
2. Maital, S., Seshadri, D.V.R.: Innovation management. Strategies, concepts and tools for growth and profit. Sage, New Delhi (2012)
3. Chesbrough, H.W., Vanhaverbeke, W., West, J.: Open innovation. Researching a new paradigm. Oxford University Press, Oxford (2006)
4. Reichwald, R., Piller, F.: Interaktive Wertschöpfung. Open Innovation, Individualisierung und neue Formen der Arbeitsteilung. Gabler, Wiesbaden (2006)
5. Chesbrough, H.: Open business models. How to thrive in the new innovation landscape. Harvard Business School Press, Boston, Mass. (2006)
6. Howe, J.: The Rise of Crowdsourcing. Wired Magazine, Vol. 14(6), pp. 1–4 (2006)
7. La Vecchia, G., Cisternino, A.: Collaborative workforce, business process crowdsourcing as an alternative of BPO. In: Proceedings of the 10th International Conference on Current Trends in Web Engineering, pp. 425–430. Springer-Verlag, Berlin, Heidelberg (2010)
8. Kazai, G.: In search of quality in crowdsourcing for search engine evaluation. In: Proceedings of the 33rd European Conference on Advances in Information Retrieval, pp. 165–176. Springer-Verlag, Berlin, Heidelberg (2011)
9. Howe, J.: Crowdsourcing. How the power of the crowd is driving the future of business. Random House Business, London (2008)
10. Poetz, M.K., Schreier, M.: The Value of Crowdsourcing: Can Users Really Compete with Professionals in Generating New Product Ideas? In: Journal of Product Innovation Management, Vol. 29(2), pp. 245–256 (2012)
11. Sloane, P.: The brave new world of open innovation. Strategic Direction, Vol. 27(5), pp. 3–4 (2011)
12. Brabham, D.C.: Crowdsourcing the Public Participation Process for Planning Projects. Planning Theory, Vol. 8(3), pp. 242–262 (2009)
13. Brabham, D.C.: Moving the crowd at Threadless. Motivations for participation in a crowdsourcing application. Information, Communication & Society, Vol. 13(8), pp. 1122–1145 (2010)
14. Leimeister, J.M., Huber, M., Bretschneider, U.: Leveraging Crowdsourcing: Activation-Supporting Components for IT-Based Ideas Competition. In: Journal of Management Information Systems, Vol. 26(1), pp. 197–224 (2009)
15. Lakhani, K.R., Jeppesen, L.B., Lohse, P.A.: The Value of Openness in Scientific Problem Solving. Working Paper 07-050. Harvard Business School (2007)
16. Zheng, H., Li, D., Hou, W.: Task Design, Motivation, and Participation in Crowdsourcing Contests. In: International Journal of Electronic Commerce, Vol. 15(4), pp. 57–88 (2011)
17. Malone, T.W.: Toward a theory of intrinsically motivating instruction. In: Cognitive Science, Vol. 5(4), pp. 333–369 (1981)
18. Malone, T.W.: Heuristics for designing enjoyable user interfaces: Lessons from computer games. In: Proceedings of the 1982 Conference on Human Factors in Computing Systems, pp. 63–68. ACM, New York, NY, USA (1982)
19. Chesbrough, H.W.: Open innovation. The new imperative for creating and profiting from technology. Harvard Business School Press, Boston, Mass. (2003)
20. Munkongsujarit, S., Srivannaboon, S.: An integration of broadcast search in innovation intermediary for SMEs: A preliminary study of iTAP in Thailand. In: Technology Management for Emerging Technologies (PICMET), 2012 Proceedings of PICMET '12, pp. 2117–2124 (2012)
21. Lindgren, P., Rasmussen, O.H., Poulsen, H.: Open Business Model Innovation in Healthcare Sector. In: Journal of Multi Business Model Innovation and Technology, Vol. 1(1), pp. 23–52 (2012)
22. Docherty, M.: Primer on open innovation: Principles and practice. In: PDMA Visions Magazine, Vol. 30(2), pp. 13–17 (2006)
23. Ahn, J., Skudlark, A.: Managing risk in a new telecommunications service development process through a scenario planning approach. In: Journal of Information Technology, Vol. 17(3), pp. 103–118 (2002)
24. Valeri, S.G., Rozenfeld, H.: Improving the Flexibility of New Product Development (NPD) through a New Quality Gate Approach. In: Journal of Integrated Design and Process Science, Vol. 8(3), pp. 17–36 (2004)
25. Van Oorschot, K., Sengupta, K., Akkermans, H.: Get Fat Fast: Surviving Stage-Gate in NPD. In: Journal of Product Innovation Management, Vol. 27(6), pp. 828–839 (2010)
26. Cooper, R.G.: Perspective: The Stage-Gate Idea-to-Launch Process: Update, What's New, and NexGen Systems. In: Journal of Product Innovation Management, Vol. 25(3), pp. 213–232 (2008)
27. Gassmann, O.: Crowdsourcing. Hanser Verlag, München (2012)
28. Schenk, E., Guittard, C.: Towards a characterization of crowdsourcing practices. In: Journal of Innovation Economics, Vol. 7(1), pp. 93–107 (2011)
29. Albors, J., Ramos, J.C., Hervas, J.L.: New learning network paradigms: Communities of objectives, crowdsourcing, wikis and open source. In: International Journal of Information Management, Vol. 28(3), pp. 194–202 (2008)
30. Pan, Y., Blevis, E.: A survey of crowdsourcing as a means of collaboration and the implications of crowdsourcing for interaction design. In: Collaboration Technologies and Systems (CTS), 2011 International Conference on, pp. 397–403 (2011)
31. Tapscott, D., Williams, A.D.: Wikinomics. How mass collaboration changes everything. Portfolio, New York (2006)
32. Bagozzi, R.P., Dholakia, U.M.: Intentional social action in virtual communities. In: Journal of Interactive Marketing, Vol. 16(2), pp. 2–21 (2002)
33. Ridings, C.M., Gefen, D.: Virtual Community Attraction: Why People Hang Out Online. In: Journal of Computer-Mediated Communication, Vol. 10(1) (2004)
34. Hertel, G., Niedner, S., Herrmann, S.: Motivation of software developers in Open Source projects: an Internet-based survey of contributors to the Linux kernel. In: Research Policy, Vol. 32(7), pp. 1159–1177 (2003)
35. Hars, A., Shaosong, O.: Working for free? Motivations of participating in open source projects. In: System Sciences, 2001. Proceedings of the 34th Annual Hawaii International Conference on, p. 9 (2001)
36. Lakhani, K., Wolf, R.G.: Why Hackers Do What They Do: Understanding Motivation and Effort in Free/Open Source Software Projects. In: Feller, J., Fitzgerald, B., Hissam, S., Lakhani, K. (eds.): Perspectives on Free and Open Source Software. MIT Press, Cambridge (2005)
37. Deterding, S., Sicart, M., Nacke, L.: Gamification. Using game-design elements in non-gaming contexts. In: Proceedings of the 2011 Annual Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '11), pp. 2425–2428. Vancouver, BC, Canada (2011)
38. Fitz-Walter, Z., Tjondronegoro, D., Wyeth, P.: Orientation Passport. Using gamification to engage university students. In: Proceedings of the 23rd Australian Computer-Human Interaction Conference, pp. 122–125 (2011)
39. Malone, T.W.: What Makes Computer Games Fun. Xerox Palo Alto Research Center (1981)
40. Saxton, G.D., Oh, O., Kishore, R.: Rules of Crowdsourcing: Models, Issues, and Systems of Control. In: Information Systems Management, Vol. 30(1), pp. 2–20 (2013)
41. Brabham, D.C.: Crowdsourcing as a Model for Problem Solving: An Introduction and Cases. In: Convergence: The International Journal of Research into New Media Technologies, Vol. 14(1), pp. 75–90 (2008)
42. Muhdi, L., Daiber, M., Friesike, S.: The crowdsourcing process: an intermediary mediated idea generation approach in the early phase of innovation. In: International Journal of Entrepreneurship and Innovation Management, Vol. 14(4), pp. 315–332 (2011)
43. Mladenow, A., Kryvinska, N., Strauss, C.: Towards cloud-centric service environments. In: Journal of Service Science Research, Vol. 4(2), pp. 213–234 (2012)
44. Schildhauer, T., Voss, H.: Open Innovation and Crowdsourcing in the Sciences. In: Bartling, S., Friesike, S. (eds.): Opening Science, pp. 255–269. Springer International Publishing, Cham (2014)
45. Kimmel, A.J., Kitchen, P.J.: WOM and social media: Presaging future directions for research and practice. In: Journal of Marketing Communications, Vol. 20(1-2), pp. 5–20 (2014)
46. Dahlander, L., Piezunka, H.: Open to suggestions: How organizations elicit suggestions through proactive and reactive attention. In: Research Policy (2013)
47. Sarodnick, F., Brau, H.: Methoden der Usability Evaluation. Wissenschaftliche Grundlagen und praktische Anwendung. Verlag Hans Huber, Bern (2010)
48. Brau, H., Schulze, H.: Kooperative Evaluation: Usability Inspektion in komplexen und verteilten Anwendungsdomänen. In: Hassenzahl, M. (ed.): Usability Professionals 2004. Berichtband des zweiten Workshops des German Chapters der Usability Professionals Association e.V., German Chapter der Usability Professionals Association, Stuttgart (2004)
49. Nielsen, J.: Reliability of severity estimates for usability problems found by heuristic evaluation. In: Posters and Short Talks of the 1992 SIGCHI Conference, pp. 129–130 (1992)