Beyond Open Source Software: A Framework, Implications, and Directions for Researching Open Content

Chitu Okoli, John Molson School of Business, Concordia University, Montreal
Kevin Carillo, Information Management, Toulouse Business School, Toulouse

July 2013

Abstract

The same open source philosophy that has traditionally been applied to software development can be applied to the collaborative creation of non-software information products, such as books, music and video. Such products are generically referred to as open content. Due largely to the success of large projects such as Wikipedia and the Creative Commons, open content has gained increasing attention not only in the popular media, but also in scholarly research. It is important to rigorously investigate the workings of the open source process in these new media of expression. This paper introduces the scope of emerging research on the open content phenomenon, other than open source software. We develop a framework for categorizing digitized works as utilitarian, factual, aesthetic or opinioned works. Based on these categories, we review some key theory-driven findings from open source software research and assess the applicability of extending their implications to open content. We present a research agenda that integrates the findings and proposes a list of research topics that can help lay a solid foundation for open content research. We also briefly review the literature for some specific directions of open content research, involving the quality of products, the marketing of digital music, and open content in developing countries.

Keywords: Open content; open knowledge; free cultural works; open source software; free software; Wikipedia; Creative Commons

Introduction

The same open source philosophy that has traditionally been applied to software development can be applied to the collaborative creation of non-software information products, such as encyclopedias, books, and dictionaries. Most notably, the ten-year-old Wikipedia is a comprehensive general encyclopedia, comprising over 18 million articles in over 250 languages, built using the "open source" model. This extension of the open source model to non-software information products is generally called "open content" (Pfaffenberger 2001).

As open content begins to develop in its breadth and depth of applications, the possibilities for new community-created media products are endless: open books, music, video, poetry, recipes—indeed products of just about any medium (see http://freedomdefined.org/Portal:Index). Information systems development has traditionally focused on software-based systems, centring on human interaction with information and communication technologies to enable individual and organizational goals. However, the information age has introduced the digitization of virtually every medium of recorded human expression. In this article, we collectively call such works digitized works: books, recorded music, videos, articles, and all other fixed works, as distinct from other kinds of human expression (such as ideas and processes) that are not tangibly fixed. With the advent of the Internet, such works can now be developed using Web-based information systems that enable geographically- and temporally-distributed content developers to collaborate on their creation. Whereas numerous studies have provided guiding frameworks for research on open source software (OSS) (Aksulu and Wade 2010; Feller and Fitzgerald 2000; Krogh and Hippel 2006; Jin, Robey and Boudreau 2007; Lerner and Tirole 2001, 2005; Nelson, Sen and Subramaniam 2006; Niederman et al. 2006; Rossi 2004; Scacchi 2007), little research has attempted to encompass all forms of open content in a comprehensive framework that facilitates learning the fundamental principles that apply across this broader phenomenon, of which OSS is but one form. This article presents a framework that can serve as a theoretical base for the scholarly study of the nascent and rapidly growing phenomenon of open content. Much has been learnt from the experience of open source software, and much remains to be learnt; however, the applications to diverse media are far broader and more far-reaching than software alone. To understand how and why open content is becoming increasingly significant in today's information society, it is necessary to have some understanding of the development of open source software. Indeed, although open source software preceded open content, it should properly be classified as a subset—the first instance—of open content.

We begin by defining the term, which is not such a simple matter. Before developing a definition, however, it is theoretically important to distinguish our definition of open content from other categories of intellectual property that might be labelled as such. In particular, we distinguish open content from the following categories:

1. User-generated content, where users contribute content to a Web service. The contributors usually grant the host website a non-exclusive license to distribute the content, but do not usually grant rights to redistribute the composite content. Moreover, the content is usually presented as atomic contributions (such as user comments, uploaded videos or pictures), with limited cumulative development of individual works or contributions.
2. Social media, where the focus is on interaction between Internet users and the resulting communal and technology-mediated sociological effects; this is closely related to user-generated content, with a focus on interpersonal communication. Licensing terms are often quite restrictive regarding the reuse or repurposing of the content.
3. Open standards, which permit royalty-free implementation of a technical specification but strongly discourage derivatives, which would defeat their purpose of standardisation (Zhu 2007; Mukhtar and Rosberg 2003). An open standard is more akin to an "open patent", which we briefly discuss later; it is distinct from open content.

In contrast, a few commonly used terms do generally fall within the scope of what we call "open content" here:

1. Free content, that is, content available free of charge (which we refer to as gratis content), which usually permits download at no cost, but not modification or sale. This includes "freeware", software that may be downloaded and executed at no charge.


2. Open access, which is a specific kind of free or gratis content as described in the previous item, mainly referring to the gratis availability of books or scholarly articles (Churchill and Vanderbeeken 2008).
3. Creative Commons-licensed content, which is licensed specifically to meet various needs of open content providers (Cheliotis 2009).
4. Open Educational Resources (OER), which often include a clause forbidding commercial usage, but are normally otherwise free for reuse, modification and distribution (Downes 2007).

The one common feature across all these latter categories is that they are made available to the public under a license that permits users to redistribute them at no charge. Other restrictions do apply in all cases, even if these are as minor as an obligation to acknowledge the rights owner; this distinguishes such works from the public domain, where no restrictions apply.

This article proceeds as follows: having motivated the need for open content research and defined the scope of this topic, we proceed in the next section with a rigorous development of a formal definition of open content suitable for this research program. We next provide a general review of the scholarly research to date that has studied open content according to our definition here. In the following two sections, we develop a categorization for works that is particularly relevant for understanding the implications of open content development. Based on these categories, we then consider the applicability of some implications of significant theory-relevant findings from open source software research for open content. We follow this with a discussion of some specific directions of open content research, and then finally conclude the article.

Definition of open content

There have been a number of significant attempts to define open content. Most of these attempts have essentially tried to broaden the definition of free software or open source software to apply to works of all kinds. Thus, in this section, we review the development of such definitions for software and the attempted analogues for other kinds of works. We use this as a background to present our own definition and highlight how it differs significantly from what has preceded it.

From free software and open source software to non-software

In the late 1980s, the Free Software Foundation developed "the Free Software Definition", a clear, succinct definition of what they termed "free software", emphasizing "free" to indicate software that confers freedom and rights on users, rather than software available at no charge (Wikipedia contributors 2009b). In 1997, the Open Source Initiative developed the similar "Open Source Definition", emphasizing developers' rights to access, modify, and distribute source code (Perens 1999). However, for several years there was no comparably precise specification of what qualifies for the label "open content". As a result, this label is used for media as diverse as public domain texts released on the Web (such as Project Gutenberg, http://www.gutenberg.org) and open access scholarly journals that are made available for free download, yet do not permit modifications of any kind. There have been two significant responses to this ambiguity. In 2005-2006, Pollock, Keegan and Walsh developed the Open Knowledge Definition (OKD) (http://www.opendefinition.org/okd/), an attempt to generalize the Open Source Definition to open content and "open data". The OKD has eleven conditions for considering a work "open", the most important of which are 1) available gratis; 2) permits redistribution; 3) permits distribution of derivative works; 4) no technological barriers to use, such as digital rights management; 5) "no discrimination against persons or groups" permitted; and 6) "no discrimination against fields of endeavor" permitted (in particular, commercial reuse must be permitted).

While the OKD's authors aligned themselves with the open source software movement in modeling their definition after the Open Source Definition, another group led by Erik Möller (most of them significant figures in Wikipedia's organizational structure, the Wikimedia Foundation) developed what they style the Definition of Free Cultural Works (DFCW) (http://freedomdefined.org/Definition), patterned closely after the Free Software Definition: 1) the freedom for anyone to use the work for any purpose without restriction; 2) the freedom of full access to the source material used for generating the content; 3) the freedom to redistribute copies of the original work; and 4) the freedom to redistribute modified versions of the work (derivative works).

The question of using a term like "open content" versus one like "free content" or "free cultural works" harks back directly to the fierce debate between advocates of "free software" versus "open source software". Free software advocates stress that the most important meaning of the word "free" is "liberty", and that this appellation is thus the most appropriate for software that accords users certain prescribed liberties. They criticize the term "open source" as meaning nothing more than that people are allowed to see the source code, without reference to any of the other important liberties that form part of their official Free Software Definition (nor even of the Open Source Definition, for that matter) (Stallman 2007). Open source software advocates counter that the most common understanding of "free software" is simply "freeware", that is, software that is available at no charge, regardless of license or distribution rights. They argue that this understanding undermines the important principle of access to the source code. While they concede that "open source" is not completely descriptive (though both sides admit that the English language provides no better alternative than what they respectively attempt), they argue that the term is novel to novices and invites a definition on their own terms, unlike "free software", whose meaning novices incorrectly assume they understand (Perens 1999; Raymond 2001).

In our view, the debate is not primarily about the appropriateness of the terms; rather, the terms have become banners representing two significantly different philosophical perspectives. Free software advocates insist on the need to defend what they see as a human right of access to information, embodied in this case as full access to software, such a critical component of our contemporary information society. They perceive open source advocates (and, in similar terms, the Creative Commons organization) as sell-outs who are willing to compromise information rights for short-sighted, short-term gains in partnering with proprietary-minded businesses. Open source advocates, in contrast, tend to extol the technological and societal benefits of open source software and advocate it as a social good, without seeing the need to insist on it as a human right. They explicitly chose the term "open source" to distance themselves from the deliberate revolutionary implications of "free software" (Raymond 2001).

The DFCW was developed to guard and shape the very existence of open content development communities. Open content uses intellectual property law to preserve certain freedoms (hence the name, "free cultural works") regarding the creation, modification, and sharing of any information or content that can be stored in digitized format.
Similar to the description of free software (http://www.gnu.org/philosophy/free-sw.html), the DFCW considers a license to be "free" if it grants users of the content the following key rights:

1. The freedom for anyone to use the work for any purpose without restriction. There are no restrictions against commercial, military, foreign, or any other use, and discrimination against users for any reason is expressly forbidden.
2. The freedom of full access to the source material used for generating the content. For textual content, the text itself is the source material. However, for other content such as performed music or dances, this right grants content creators access to the music scores and the choreography sequences, respectively. (Of course, for computer software, this would refer to the source code.) When a content creator sees how a work was actually composed, as specified in the source material, they can fully understand its inner workings and can intelligently modify it as they deem appropriate.

3. The freedom to redistribute copies of the original work. Not only does open content freely permit redistribution, but it also permits sale of the work at any price. Restrictions on commercial redistribution disqualify a work from being "free".
4. The freedom to redistribute modified versions of the work (derivative works). This includes absorbing the work, in whole or in part, into other works created by other content creators, and redistributing and selling these derivative works.

While the DFCW defines content as a FCW if its license grants these four freedoms, it does permit certain restrictions. "In particular, requirements for attribution, for symmetric collaboration (i.e., 'copyleft'), and for the protection of essential freedom are considered permissible restrictions" (Möller 2008).

Free content versus open content

We choose not to use the term "free content" since the most intuitive meaning of that English term is "content that is available for free, that is, at no financial cost". (In contrast, the word libre used in French or Spanish is quite appropriate to translate the same idea as "open content" in English; thus, we would translate the expression as œuvres libres in French and obras libres in Spanish.) In English, we do not consider it helpful to have to further specify that our meaning is actually "free as in free speech, not as in free beer", as the Free Software Foundation is forced to do. For example, a simple search in June 2009 of ProQuest's database yielded only 14 peer-reviewed articles with the key phrase "open content", all of which referred to information products that are made available to the public under terms less stringent than "all rights reserved" copyright. "Free content" yielded 10 results, and all but one meant "content that is free of charge"; the sole exception was a reference to Wikipedia, the "free content" encyclopedia, which is the only usage that fits our meaning in this article. This demonstrates our point that the common understanding of "free content" is free-of-charge content, not freedom-granting content. Incidentally, no articles were found for the key phrase "free cultural works".

Open content

With this background, we now present a formal definition: Open content is any digitized work for which the rights holder authorizes redistribution at no financial charge, while perhaps imposing some conditions and retaining some restrictions (such as imposing requirements of attribution or copyleft, or restricting commercial redistribution or redistribution of modifications).

Work in this definition is a general term that encompasses a broad range of cultural creations covered by intellectual property laws. For the specific genres of work that our definition covers, we refer to the Berne Convention for the Protection of Literary and Artistic Works (1971) and the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS, 1994). The appendix to this article presents the full text of the relevant sections of these international agreements, which bind all member nations of the World Trade Organization, covering almost 200 countries.
In summary, these agreements specify the following categories of works that we include in our definition:

• "Literary and artistic works" (Berne Convention Article 2.1): books, articles, lectures, speeches, drama, dance, music, lyrics, video, fine art, sculpture, photography, maps, sketches, etc.
• Computer programs and databases (TRIPS Article 10).
• Industrial designs and textile designs (TRIPS Article 25).
• Layouts (topographies) of integrated circuits (TRIPS Article 35).

We do not include any other kind of intellectual property in our reference to "works". In particular, we explicitly exclude the following other kinds of intellectual property that are covered in Part II of TRIPS: trademarks (Article 15), trademarked geographical indications (Article 22), patents (Article 27), and trade secrets (Article 39). Trademarks are excluded because an "open trademark" is an oxymoronic concept—permitting widespread use of a trademark could invalidate trademark protection. An "open trade secret" is likewise nonsensical. Patents are also excluded because they are legal monopolies granted on ideas, and ideas are not tangible works. Although there is such a concept as an open patent (see http://en.wikipedia.org/wiki/Open_patent), a patented idea is fundamentally different from the fixed expressions of that idea, which are within the scope of our open content definition.

Our specification of digitized works highlights the fact that we are not focusing on any physical media by which the works might be distributed; we are focusing on the information content of the work itself. To facilitate sharing and repurposing of the content at zero marginal cost, the work must be provided digitally; otherwise, there needs to be payment for some physical medium of delivery, which takes the case beyond our scope. In our definition, open content must be distributable free of charge, meaning that the rights holder has waived their economic right to remuneration. This does not mean, however, that open content cannot be sold—that depends on the specific restrictions of the license, as we will discuss shortly. It does mean that regardless of how it is obtained, once anyone obtains the open content work, they are legally authorized to redistribute it to others without having to pay any royalties to the rights holder.

Our definition of open content generally corresponds with works that might be covered by the Creative Commons suite of licenses, with a few notable differences. Accordingly, the conditions and restrictions in our definition consist of the following general categories:

• Attribution: Most open content licenses, other than public domain dedications, require attribution of the rights holder. All current Creative Commons licenses require attribution (the "BY" clause), though some of the first-generation licenses did not (Creative Commons dropped the licenses that do not require attribution because they were rarely used; see http://creativecommons.org/weblog/entry/4216).

• Commercial or non-commercial use: Many open content licenses restrict redistribution to non-commercial purposes. Creative Commons offers such an option with the "NC" (non-commercial) clause. However, the OKD and DFCW particularly decry such restrictions—open knowledge or free cultural works must not restrict any domain of application for the licensed works.

• Other restrictions on use: According to our definition, a work would still be considered open content even if it were to restrict some uses, such as military or morally objectionable applications. However, most public open content licenses, including all Creative Commons licenses, do not feature such exclusions.

• Authorization of distribution of derivatives: Open content licenses might or might not authorize users of the work to distribute modifications. Fair dealing and fair use laws permit anyone to modify a work that they have legally obtained, but redistribution of the modifications requires permission of the rights holder. Creative Commons provides an "ND" (no derivatives) clause for works that do not permit redistribution of modified versions.

• Copyleft: For open content where distribution of derivative works is authorized, the rights holder might impose a copyleft condition. This means that any distributed modifications must likewise be shared under a license (usually the same license as the original, or a related one) that authorizes users to freely redistribute modifications. The goal of such a clause is to ensure that improvements to the original are freely shared with the public. Creative Commons offers copyleft with their "SA" (share-alike) clause.

• Format of distribution: Some public open content licenses specify that works thus licensed, and particularly redistributed modifications, must be provided in some specified format. Most notable are the open source software licenses that require distribution of the human-readable source code of the software. For non-software open content, there usually is no corresponding concept, since very often, to access a digital work is to obtain the ability to modify it. However, this is not always the case. For example, obtaining a video does not permit easy modification of the audio track or of the individual frames. In recognition of these concerns, the DFCW explicitly requires freedom of access to the source material; we comment on this point later when discussing music and video as aesthetic works. However, we are unaware of any non-software open content license that requires access to the source components of the work. Another criterion related to format of distribution involves some licenses forbidding the protection of distributed content using digital rights management (DRM) that would hinder potential consumers' ability to access the content. The OKD explicitly forbids such measures, as do all Creative Commons licenses (see http://wiki.creativecommons.org/FAQ#Is_Creative_Commons_involved_in_digital_rights_management_.28DRM.29.3F).
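To make these dimensions concrete, the brief sketch below (in Python) models a license as a simple record of the conditions and restrictions just described, and contrasts this article's broad test for open content (royalty-free redistribution is authorized, whatever other restrictions remain) with a stricter OKD/DFCW-style test. This is purely an illustrative sketch: the class, field names, and example licenses are hypothetical and do not correspond to any actual legal instrument.

```python
from dataclasses import dataclass

@dataclass
class ContentLicense:
    """Hypothetical record of the licensing dimensions discussed above."""
    royalty_free_redistribution: bool  # may anyone pass the work on without paying royalties?
    requires_attribution: bool         # attribution condition (a "BY"-style clause)
    allows_commercial_use: bool        # False corresponds to a non-commercial ("NC"-style) restriction
    allows_derivatives: bool           # False corresponds to a no-derivatives ("ND"-style) restriction
    copyleft: bool                     # share-alike ("SA"-style) condition on distributed modifications
    forbids_drm: bool                  # anti-DRM clause, as required by the OKD

def is_open_content(lic: ContentLicense) -> bool:
    """Open content in this article's broad sense: royalty-free redistribution is authorized,
    whatever other conditions or restrictions remain."""
    return lic.royalty_free_redistribution

def is_okd_dfcw_free(lic: ContentLicense) -> bool:
    """Stricter OKD/DFCW-style test: commercial use and derivatives must also be permitted;
    attribution and copyleft remain permissible restrictions."""
    return (lic.royalty_free_redistribution
            and lic.allows_commercial_use
            and lic.allows_derivatives)

# Two hypothetical example licenses: a copyleft license and a non-commercial, no-derivatives license.
copyleft_license = ContentLicense(True, True, True, True, True, True)
nc_nd_license = ContentLicense(True, True, False, False, False, False)

print(is_open_content(copyleft_license), is_okd_dfcw_free(copyleft_license))  # True True
print(is_open_content(nc_nd_license), is_okd_dfcw_free(nc_nd_license))        # True False
```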

Open content licensing

As with open source software, the most important part of the institutional infrastructure that permits the existence of the phenomenon is the existence of open content licenses (Okoli and Carillo 2005). The oldest open content "license" is actually not a license at all: it is releasing works into the public domain. This occurs either automatically, when the legal rights term expires (for copyright, life of the author plus 50 years in most places, though life plus 70 years in the United States), or when the creator explicitly releases the work into the public domain. Although not a license, works in the public domain fully qualify as open content according to the OKD or DFCW—with the one exception that no one is required to make the source material available, if applicable. For example, photographs taken in the 19th century might be in the public domain, but the source negatives from which they are printed are generally not available. Note, however, that there is no copyleft provision for works in the public domain—anyone can make modifications and claim full copyright protection on their original portions of the derivative work.

The first explicit open content license was the GNU Free Documentation License (FDL or GFDL, http://www.gnu.org/copyleft/fdl.html), first written by the Free Software Foundation in 1999 as a textual complement to their GNU General Public License (GPL) for free software (Wikipedia contributors 2009a); it was designed mainly as an instrument whereby the documentation accompanying free software would likewise be free. Although it predates the OKD and DFCW, this license qualifies as an Open Knowledge and FCW license, since it was based on their common ancestor, the GNU General Public License; it carries the added provision of copyleft—those who distribute derivative works licensed under the FDL must likewise release their work under the same license.

The second major effort towards licenses for open content has been the spectrum of licenses designed by Creative Commons (CC, http://www.creativecommons.org) (Garcelon 2009). CC licenses all require attribution of the original author (the BY label), but permit users of the licenses to choose among any combination of three optional restrictions:

1. NonCommercial (NC): Works may be restricted to non-commercial usage only.
2. NoDerivs (ND): Distribution of derivative works may be forbidden.

3. ShareAlike (SA): Derivative works, if distributed, may be required to be redistributed under the same license (a copyleft provision).

The spirit of CC has been to give content creators a choice in deciding what level of "openness" they desire in distributing their works, ranging from a traditional "all rights reserved" copyright to the strongly "free" mentality of the GNU FDL, or the lack of restriction of the public domain. Although we will later comment on some implications of their licenses, we must note that only two CC licenses meet the standards of the OKD and the DFCW:

1. CC-BY: the attribution-only license, which only requires attribution of the original author. This license permits commercial usage and redistribution of derivatives, and imposes no copyleft requirement.
2. CC-BY-SA: the attribution-share-alike license, which adds a copyleft requirement to the BY license—a permissible restriction under the OKD and the DFCW.

It is important to note, though, that although the NonCommercial restriction significantly limits modification and redistribution, it does not forbid them. What it does is effectively require reusers to negotiate terms for royalty payments in the case of redistribution of modified works. (The same could be said of the NoDerivs restriction, though this is typically used when the author has no intention of authorizing modifications of their work under any terms.) Although this is against the spirit of free culture as embodied in the FDL, it is a significant point regarding some categories of open content, such as Open Educational Resources, where the non-commercial restriction does not significantly restrict modifications and community development of works; we discuss this in more detail later. By far the bulk of open content works (as defined by the OKD and DFCW) are licensed under either the FDL, CC-BY or CC-BY-SA. A few other open content licenses exist, such as Against DRM (http://www.freecreations.org/Against_DRM2.html), designed to forbid the use of DRM in information products, and the Free Art License (http://artlibre.org/licence/lal/en/), targeted at works of art.
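To illustrate how the Creative Commons clause combinations relate to the OKD and DFCW standards, the short sketch below (in Python) enumerates the combinations of the three optional clauses on top of the mandatory BY clause and flags those that would qualify as Open Knowledge or Free Cultural Works, namely CC-BY and CC-BY-SA. The sketch is purely illustrative; it ignores license versions and the fact that some clause combinations (such as ND together with SA) are not offered in practice.

```python
from itertools import product

OPTIONAL_CLAUSES = ("NC", "ND", "SA")  # NonCommercial, NoDerivs, ShareAlike

def license_name(clauses):
    """Build a CC-style license name; the attribution (BY) clause is always present."""
    return "-".join(["CC", "BY", *clauses])

def qualifies_as_okd_dfcw(clauses):
    """A license fails the OKD/DFCW tests if it restricts commercial use (NC) or forbids
    redistribution of derivatives (ND); copyleft (SA) is a permissible restriction."""
    return "NC" not in clauses and "ND" not in clauses

for flags in product([False, True], repeat=3):
    clauses = [c for c, on in zip(OPTIONAL_CLAUSES, flags) if on]
    print(f"{license_name(clauses):15} qualifies as OKD/DFCW-free: {qualifies_as_okd_dfcw(clauses)}")
```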

Review of scholarly research on open content

Not surprisingly, considering how recent the field of research is, the bulk of scholarly work conducted thus far on open content according to our definition, other than research on open source software, has been the extensive body of research on Wikipedia. Some extensive literature reviews have been conducted on this work (Okoli et al. 2012). For this article, we focus only on reviewing open content other than Wikipedia; to our knowledge, such a review has not yet been conducted. We searched the full citations and abstracts of some relevant scholarly databases (EBSCO, Sage Publications, ProQuest, and ISI Web of Knowledge) for the keywords "open content", "free content" and "free cultural works". After removing Wikipedia-related studies and those that did not meet our open content definition (for example, our searches brought up several studies on the Open Content Alliance, which is a digital archive initiative but is not actually "open content"), we found only 19 peer-reviewed journal articles; we summarize their contents here.

Cheliotis (2009) rigorously investigated the implications of open content licensing based on the characteristics of the work licensed. We discuss his open content categorizations in detail later in this article, as well as some of his suggestions for licensing various kinds of content. In addition, he proposed decision trees that model authors' reasoning in how to license their content and empirically tested them on the distribution of Creative Commons license choices on Flickr, a site where users post and share their photographs. Although his study broadly involved Creative Commons licensing and not only open content, the implications for the open content licenses in his study are quite important.

A few scholars have discussed legal aspects of open content licensing. Liao (2006) traces some of the intellectual property reforms that Western societies have been experiencing (including questioning the "property" aspect of "intellectual property"), and encourages Chinese society to aim towards a "creative da-tong" (state of societal prosperity through creativity) by incorporating the emergent open cultures, including open content creation, into its culture and copyright reforms. This perspective advocates a focus on open content development for the sake of promoting the resultant products, beyond just the economic returns to creators. Armstrong (2010) discusses the legal problems in the United States with open source and open content licenses in light of American legal restrictions on copyright transfers; the main problem is that American law generally favours authors' reclaiming their copyrights, even if they have transferred them in the past. While admitting that such legal interpretations are designed to protect authors from abusive corporations, he points out the risk that open source and open content licenses could be legally revoked by the authors or by their heirs. Chiao (2010) economically models the difference in caution in redistribution for individuals versus society in the face of possible legal liability issues; he also experiments with optimal team size and modularity in open content teams.

Certain studies describe specific websites or projects that implement open content models, rather than treating the open content methodology or philosophy. Such projects include a WebGIS platform for an urban planning geographical information system (Budoni et al. 2007), and the Digital Universe, an open content encyclopedia that, unlike Wikipedia, uses named experts to create its content (Korman 2006).

A very significant subdomain of open content research has treated Open Educational Resources (OER), also known as open courseware, where the goal is to make educational materials available for liberal sharing and cumulative development. Although most OER permit modifications and distribution of derivatives, commercial redistribution is often forbidden. Whereas this disqualifies most OER as Open Knowledge or FCW and limits commercial development, the restriction is of little relevance to most educators, who are mainly interested in using the materials for pedagogic purposes, which are permitted; the restriction against commercial distribution does not affect most of the community that uses and develops OER. Thus, unlike other open content that does not qualify as Open Knowledge or FCW, the implications of open content development are largely applicable in the case of OER as far as the academic community is concerned. Wiley and Gurrell (2009) relate a brief history of open content and open educational resources, highlighting some of the moments that defined the course of its development. Downes (2007) discusses various models for providing OER sustainably, especially in the presence of pressures to provide content gratis (that is, at no charge to the user). He presents many perspectives, involving funding models, technical models, content models and staffing models that provide viable possibilities, a number of which have already been successfully deployed. Meyers et al (2008) describe an open content textbook on bioentrepreneurship; a large part of the project's motivation was to educate students in developing countries.
Couros (2006) studied the open culture practices of educators, including their creation and use of open content. Schweik et al (2009) described an online introductory course on geographic information systems that is part of an open content curriculum. Schuwer and Mulder (2009) describe a successful OER initiative carried out by the Dutch Open Universiteit Nederland.

An important related aspect of open content that has been widely commented on by scholars is open access. A number of scholars, especially in library science, have commented on the implications of the Open Content Alliance, a consortium established to set a direction for the digitization of copyrighted books and for making them available for limited access to the public through libraries and non-commercial providers (Swartz 2007; Dougan 2010). This Alliance is largely a response to Google's commercial book digitization efforts (McShane 2008). Ven et al (2008) present libOR, a collection of open content data sets for operations research. Open access is of great interest in providing access to knowledge in developing countries (Nwagwu and Ahmed 2009). In addition to discussing open access, Ballantyne (2009) briefly discusses the beginnings of open content licensing at the International Rice Research Institute.


Although not directly related to open content, we note one other related article: Schweik et al (2005) suggested that the open source or open content development model could be used to reform various academic practices such as peer review and the collaborative development of scholarly work, which they demonstrate by modeling change in land use. Our review shows that, other than Cheliotis' (2009) study on licensing implications for works, there is little structure or general direction for the development of open content research. In the following section, we develop a framework for categorizing works in a way amenable to theorizing about their development when they are licensed using open content models.

Dimensions of works for open content licensing

Our definition of open content is much broader than the restrictive definitions of the OKD and the DFCW. From a theory perspective, those definitions have the advantage of bearing the same legal characteristics as open source software, and thus the theoretical knowledge that has been gathered in the OSS literature could reasonably be expected to have some applicability to those kinds of open content. However, with our broader definition, which includes works with different license terms (such as permitting only non-commercial redistribution), such open content is fundamentally different from OSS, and OSS theoretical knowledge cannot be expected to apply. It must be noted, however, that open source software is a very particular kind of work, with some fundamental characteristics that might not be applicable to all kinds of open content.

To help understand the nature of these characteristics of open content, we draw from some past thinking on the subject. In a discussion of the role of traditional copyright laws in contemporary society, Richard Stallman, founder of the free software movement, described three categories of works with fundamentally different characteristics that would affect the most appropriate copyright licensing terms (Stallman 2002). First, he described "functional works", where the goal is to produce a useful product; "this includes recipes, computer programs, manuals and textbooks, reference works like dictionaries and encyclopedias" (2002, 141); this is the category that includes open source software. Second, he described "works whose purpose is to say what certain people think" (2002, 142), that is, statements of people's subjective opinion, such as essays. Third, he described "aesthetic or entertaining works" (2002, 142) such as novels, music, and non-documentary films. In a study of the implications of various Creative Commons licenses, Cheliotis (2009) broadly categorized works as either functional or cultural goods. He described a functional good as one whose goal is to fulfill a consumer's practical needs, whereas a cultural good is one that mainly serves to entertain the consumer.

Partially based on Stallman's three categories and Cheliotis' two, we identify two orthogonal dimensions along which works can be classified, yielding four distinct categories with pertinent implications for open content. For our classification, we consider works primarily from the perspective of how the consumers of the works might assess or judge them. Both dimensions rest on fundamental perspectives of how reality is observed or evaluated. First, reality can be assessed ontologically as being more or less relativistic or more or less universalistic—that is, whether there are multiple realities, each depending on the relative perspective of the individual, or whether there is a universal reality that applies to all people. The second dimension concerns how the value or merit of a work is assessed, whether objectively, based on more or less the same criteria for all people, or subjectively, varying for each individual. The relativist-universalist dimension concerns the nature of the reality that is being observed. In the objective-subjective dimension, there is no consideration per se of whether the work is "real" or "true", but rather a consideration of whether it is valuable, useful, or worthy of appreciation.

Relativist versus universalist works

The first dimension borrows from Järvinen's (2008) taxonomy of information systems research. He distinguishes between the "value-laden" design science paradigm, where certain outcomes are considered more valuable than others, and the "value-free" natural/social science paradigm, where the goal is to ascertain the real and actual state of affairs in the world, with no preference given to any particular outcome. We frame these distinctions based on the philosophical duality of relativism versus universalism. Relativism holds that truths or values are not absolute; they depend on factors both intrinsic and extrinsic to individuals. Universalism holds that truths or values are universal and absolute, irrespective of the subject or context. In this dimension, we present two contrasting approaches to classifying works based on how their quality is judged (that is, a value judgement): works can either be judged based on a relativist assessment of how valuable or worthy of appreciation they are (Järvinen's value-laden category), or based on a universalist assessment of how they conform to some universally held standard (analogous to Järvinen's value-free category). For example, software and fine art are judged relativistically: they are not judged as "right or wrong", but rather as "useful or beautiful", which are relativistic standards. In contrast, maps and news reports are judged universalistically: what matters is whether they are accurate and faithful to the real facts of the situation. Although for purposes of classification we present this dimension as a duality, it should properly be considered a spectrum, with works containing characteristics that might be more or less relativist or universalist.

Objectively-evaluated versus subjectively-evaluated works

The second dimension involves the consideration of whether the value or quality assessment is generally considered an objective judgement (based on highly objective characteristics that do not vary significantly regardless of who is making the evaluation) or a subjective one (where it is understood and accepted that different people would evaluate the work differently based on their own ideologies or personal preferences). Some works are judged based on their artistic or aesthetic merits. Paintings, musical compositions, poetry and fiction are common examples of works that are judged not so much in terms of whether they are right or wrong as by whether they are beautiful, ugly, or plain. We label these "subjectively-evaluated works". The other category consists of works that have an objective criterion for determining their quality: the degree to which they are accurate according to some extrinsic standard, or to which they achieve some independently defined goal or purpose. Textbooks, software, and encyclopedias are examples of what we call "objectively-evaluated works". Works from this category are judged on their accuracy, usefulness, practicability, and other quantifiable, objective criteria of merit or value. As with the relativist-universalist dimension, the objective-subjective dimension should be considered more as a spectrum than as a discrete categorization.

This distinction is quite significant for the possibilities of collaboration in an open content project. The direction of a subjectively-evaluated work, such as an open novel, must be agreed upon by the principal authors. Because there is no objective criterion for where a plot should lead, only a very small group of authors (typically just one) would determine the direction of the plot; otherwise, arguments about where the storyline should go would be ceaseless. While contributors could add polished narratives, their collaboration would necessarily be limited. It is important to note, of course, that the openness of content has nothing to do with the creation process: it only relates to the distribution terms of the final product. What makes a novel "open" in this sense is that anyone could take the final novel, rewrite the plot as they please, and then redistribute their modified version. Even if each new version is written by only one author, such a work is fully open. What we are noting here, however, is that in subjectively-evaluated works, one of the most interesting and important aspects of open content creation—the collaborative creation process—is severely limited by the nature of the works.


In the case of objectively-evaluated works, it is much easier to set some objective standard as the goal towards which the work will lead. Most large open source software projects have some sort of published roadmap to guide their developers, just as proprietary software projects do. In the case of Wikipedia, all contributions are guided by the "founding principles" (Wikimedia contributors 2009), which define the organizational and structural policies. The most important of these is probably the principle of "neutral point of view", which restricts contributions to only objectively verifiable, documented facts. Thus, the encyclopedia is one that does not permit personal opinions (though documented statements of opinions and positions are welcome). Although the system is not perfect, it does permit people with diametrically opposed opinions to collaborate on an article by agreeing on objective facts. The point here is not whether or not Wikipedia successfully achieves its goal of neutrality; it is that having such a defined goal provides an organizational structure under which hundreds of people can contribute to the same encyclopedia article, which is not the case for artistic works. Beyond software and encyclopedias, other objectively-evaluated works have objective criteria which permit common collaboration while minimizing arguments over what should or should not be added. For example, open textbooks and educational materials include documented knowledge and follow curriculum standards; open dictionaries include documented usages of words and phrases. As we have noted, objectively-evaluated works permit collaboration in much larger projects than do subjectively-evaluated works. While both categories permit modification and redistribution, the quality benefits of having many concurrent collaborators apply mainly to objectively-evaluated works. While researchers may study either category, those more interested in the communal aspects of collaborative development would generally find more activity in the creation of objectively-evaluated open content.

Four categories of open content

Based on these two dimensions, we have four categories of works. Although these categories apply to all kinds of works, we will discuss them particularly in the context of open content development. Table 1 summarizes the categories with some examples and comments related to their development.


Table 1. Categories of works with open content licensing implications

Utilitarian works (relativist truth perspective, objective value assessment)
• Examples: software, recipes, how-to manuals, engineering designs, architectural blueprints, taxonomies, typologies
• Open content development characteristics: large number of contributors; high incentives for businesses to contribute; projects with high modularity and large granularity
• Preferred licenses: BY, BY-SA, BY-NC, BY-NC-SA

Aesthetic works (relativist truth perspective, subjective value assessment)
• Examples: fine art, music, literary fiction, poetry, song lyrics, non-documentary films, photographs, drama, games
• Open content development characteristics: contributions in small teams; low incentives for businesses to contribute; projects with very low modularity and usually large granularity
• Preferred licenses: BY, BY-SA, BY-ND, BY-NC, BY-NC-SA, BY-NC-ND

Factual works (universalist truth perspective, objective value assessment)
• Examples: textbooks, dictionaries, encyclopedias, maps, news reports, historical records, educational materials
• Open content development characteristics: large number of contributors; low incentives for businesses to contribute; numerous applications of OC development; projects with high modularity and small granularity
• Preferred licenses: BY, BY-SA, BY-NC, BY-NC-SA

Opinioned works (universalist truth perspective, subjective value assessment)
• Examples: essays, editorials, blogs, commentaries, scholarly articles, product reviews, religious and philosophical texts
• Open content development characteristics: contributions in very small teams; few examples of OC development; projects with high modularity; granularity ranges widely from very small to very large
• Preferred licenses: BY-ND, BY-NC-ND

Utilitarian works: Objectively-evaluated and relativistic

First, we have utilitarian works, which are objectively evaluated according to how well they attain a relativistic value goal. Software (the subject of open source software) falls in this category, since a software program has a definite utility goal and is judged based on how well it does the job. Even art-oriented software such as drawing software (e.g. GIMP or Inkscape) or video-editing software (e.g. Kino or Blender) is utilitarian; it is not judged based on the aesthetic value of the resulting works, but on how well it enables artists to carry out their creative visions. Other examples in this category are cooking recipes (Foodista), how-to manuals (WikiHow), engineering designs (Appropedia), and taxonomies (WikiSpecies). These all share the characteristic that their value goal is not to achieve some sort of universalist "truth", but rather to be valuable according to some relativistic criteria. Nonetheless, the evaluation of whether or not they attain these relativistic criteria is based on concrete, objective criteria.


Factual works: Objectively-evaluated and universalist

The second category is factual works, which are objectively evaluated primarily according to how universally true the work is. For such works, there are some absolute truth claims, and the works are evaluated according to these criteria. Wikipedia is in this category, as exemplified by its Neutral Point of View doctrine: the project explicitly forbids statements of opinion, and only authorizes content that can be externally, objectively documented. "Universal truth" in the case of Wikipedia refers to the existence of respectable citations external to Wikipedia for the statement in question.

There is another important sense in which factual works are very closely linked to utilitarian ones. Our use of the term "factual" must be understood only in a relative sense; it does not imply that factual works are nothing more than a mere collection of facts. In order to be eligible for copyright protection, a work must incorporate a creative element beyond the mere presentation of facts, even if that is merely a particular arrangement and presentation of the facts that involves some creative consideration. Thus, all factual works necessarily include a utilitarian component regarding a valuable presentation of the facts. (Databases are a grey area in this regard, for which reason some jurisdictions have established special database rights to grant them intellectual property protection without requiring proof of creative innovation.) Thus, the evaluation of their quality involves not only evaluating the universalist veracity of the facts presented, but also a relativistic assessment of the usefulness of the chosen presentation. These notes notwithstanding, whereas the evaluation of the presentation is very similar to the evaluation of utilitarian works, the objective evaluation of universalist facts is a critical additional component that distinguishes works of this category from their utilitarian kin, and it sometimes leads to different appropriate decisions regarding the works.

Of Stallman's three categories, his first category of "functional works" encompasses both the utilitarian and factual categories we describe here. His grouping them as a unitary category might be for reasons similar to those we have just noted. Among his list of examples cited earlier, recipes and computer programs are utilitarian works, since their value is based on how well they attain some objective standard of being "valuable". However, we distinguish documentation, textbooks (Free High School Science Texts), and reference works (Citizendium) as factual works, since their quality is based on reference to some external, universal standard of what is "true" or "factual". These two categories have similar licensing implications, which is probably why Stallman considered them a unitary category. Referring to what we call utilitarian and factual works, Stallman (2002, 143) says: "For all these functional works, I believe that the issues are basically the same as they are for software and the same conclusions apply. People should have the freedom even to publish a modified version because it's very useful to modify functional works". In other words, he believes that it is appropriate to apply a standard open content license, such as the GFDL, or CC-BY or CC-BY-SA, to such works. Cheliotis (2009) seems to agree that functional goods are generally best served by a CC-BY or a CC-BY-SA license.
However, unlike Stallman, he also recognizes that some authors prefer to permit only non-commercial use, modification, or redistribution; thus, he includes CC-BY-NC and CC-BY-NC-SA as suggested licensing terms.

Aesthetic works: Subjectively-evaluated and relativistic

The third category we identify features aesthetic works (Stallman's "aesthetic or entertaining works"), where beauty is in the eye of the beholder; these works are subjectively evaluated based on an evaluator's relativistic preference of what is valuable or beautiful. This includes music (e.g. ccMixter and Kompoz), literary works of fiction, and fiction movies (see http://www.blender.org/features-gallery/blender-open-projects/).

Although aesthetic works are quite different in many ways from utilitarian works, there is a crucial similarity that must be noted. One of the greatest challenges in translating free and open source software licenses into terms that make sense for non-software works is the software concept of source code—human-readable instructions from which the actual product (a collection of bits) is produced. On one hand, the source code is itself often not usable (except in the case of interpreted programming languages such as JavaScript and PHP) until converted into bits; on the other hand, the executable collection of bits is humanly incomprehensible without the underlying source code. In the case of open content novels or an encyclopedia like Wikipedia, there is no parallel to source code: once a reader receives the usable item (the text), there is no relevant "source code" without which the text cannot be modified. (It could be argued that Wikipedia depends on underlying wiki code, and an e-book might depend on underlying HTML, but neither is necessary for users to make effective modifications to the text.) However, music and film, although they are aesthetic works, pose challenges similar to those of software. The useful product is a stream of sounds or a combined stream of video and sound. Although the music or video could be clipped and rearranged to compose new works (which cannot be done with binary software), a musician or filmmaker is severely limited in their ability to modify a song or a film without the underlying musical scores or raw video footage. Similarly, a graphic artist is hampered in their ability to modify a graphic image without the underlying layered image file (such as Photoshop PSD or Gimp XCF). The file formats that digitize musical scores (that is, sheet music) or image layers could be considered the equivalent of the "source code" for these genres of works.

Concerning open content licensing of aesthetic works, even Stallman recognizes that there are significant complexities involved:

Now for these works, the issue of modification is a very difficult one because on the one hand, there is the idea that these works reflect the vision of an artist and to change them is to mess up that vision. On the other hand, you have the fact that there is the folk process, where a sequence of people modifying a work can sometimes produce a result that is extremely rich. ... It's a hard question what we should do about publishing modified versions of an aesthetic or an artistic work, and we might have to look for further subdivisions of the category in order to solve this problem. (Stallman 2002, 144)

Cheliotis (2009) argues that for aesthetic works (which he calls "cultural goods"), the copyleft provision is not as meaningful: even though an artist may want to permit modification and redistribution of his or her work, it is often not meaningful to reincorporate these modifications into the original work. The artist would usually be content with receiving attribution as the original source of the aesthetic idea, but would generally want to retain the original work intact. Cheliotis further argues that the non-commercial restriction is more meaningful for aesthetic works because they are more often exploited without modification than are utilitarian or factual works.

A commercial exploiter’s options are limited if a utilitarian or factual work is protected by a copyleft provision; in this case, although they might exploit the work commercially, they would be required to provide any modifications they might make for free, thus limiting any possibility of monopolizing someone else’s work. On the other hand, it is often easier for a marketing organization to exploit an aesthetic work than it is for the artist to do so themselves; the non-commercial provision assures that the artist shares in the profits. As a result of these complexities, we would expect to see the widest range of licensing options in place with aesthetic works: any of the eight Creative Commons licenses might make sense, as well as the FDL.

Opinioned works: Subjectively-evaluated and universalist
Finally, we have opinioned works, which make universalistic claims, but such claims are understood to be subjective, without an inordinate attempt to objectively evaluate them. These include essays, editorials, blogs, comments on blogs, commentaries, scholarly publications, product reviews, and religious and philosophical texts. The common theme with these kinds of works is that, although they put forth theses or statements that can only be evaluated subjectively, it is of great importance that the work be presented accurately as a faithful representation of the author’s beliefs or opinions. It is somewhat non-intuitive that we consider essays, political theses, and religious texts as universalist works. This is certainly not because we hold their contents to be universally true, nor necessarily even because the author considers the contents universally true. Rather, we are focused on a very particular universalist truth claim: whether or not the work is an accurate expression of the author’s personal opinion, view or perspective. On one hand, the works are subjectively evaluated: it is up to the author, or those who agree with the author, to determine whether they agree with or like the work. On the other hand, the truth or validity or reliability of the work is not based on the relativist criteria of whether the work is useful or beautiful; it is based on the criterion of fidelity to the author’s actual beliefs or opinions. From another perspective, an author is only satisfied that the work is completed when it accurately and fairly (universalist criteria) expresses what he or she truly feels, thinks or believes (subjective evaluation). Whereas scholarly publications can easily be seen as universalist works, it might not be so obvious why we consider them subjectively-evaluated, and thus opinioned, works. To understand this classification, it is important to distinguish a scientific discovery as an abstract concept from a concrete article that reports the discovery. A scientific discovery is not a fixed expression of a work; ideas and general knowledge are not copyrightable, not even by the person who originates or discovers them. (They might, however, be subject to patents.) Only the article that records and reports the discovery is a work, a fixed expression of the scientific knowledge. Thus, while the discovery itself is an objectively-evaluated universalist (thus factual) concept, such as could be described in Wikipedia, the article that originally reports it is an expression of the author’s chosen interpretation and mode of presentation—both of which are subjective expressions. Another illustration of this distinction is seen in the move towards “open data”,5 a factual work, which attempts to release factual scientific data under the least restrictive open content licenses possible (even public domain dedication, for unrestricted reanalysis and repurposing), as distinct from the “open access” movement, which aims to permit redistribution of scholarly articles, which might or might not permit modification of these opinioned works. Perhaps the clearest demonstration that scholarly publications are opinioned works is seen in the scholarly peer-review process. Peer review has two primary criteria to determine whether a scholarly submission ought to be published. The first is whether the article is accurate and trustworthy in what it claims to offer as knowledge; this is clearly an objective and universalist evaluation. However, the second criterion is whether the submission is a valuable contribution to the community of scholars that would read the publication source. As is evident to any scholar who has participated in the peer-review process, this is quite a subjective criterion.6 One interesting implication of this classification is the suggestion of one reason why Citizendium will not likely succeed as Wikipedia has.
Citizendium is an open content encyclopedia set up by Larry Sanger, one of Wikipedia’s co-founders, in protest at the free-for-all that he perceives Wikipedia to have become. The key difference between the two is that Citizendium gives special editorial privileges to credentialed experts in approving what is valid content. Whereas most such “editors”, as they are called, might permit a broad range of presentation styles, this Citizendium policy might lead to articles that are controlled by the interpretation of the designated editor. This introduction of a subjective evaluation (albeit an “expert” one) erodes the benefits of a factual open content project, and would likely stifle the benefits of crowdsourcing. As we argue later, opinioned works, of all categories of works, benefit the least from the collaborative development benefits of open content licensing.

5 See http://www.opendatafoundation.org/ and http://www.opendatacommons.org/
6 An interesting counter-example is seen in the journal PLOS One, whose reviewers are instructed to evaluate articles solely on factual criteria of accuracy and not on subjective criteria of value of contribution. (As an online-only journal—thus with no space restrictions—funded by author publication fees, the journal can financially afford such a policy.) We would consider articles published there to be factual works, according to our framework.

Although these opinioned works are mostly the expression of one individual, or of a very small number of individuals, one possible application that might have a more collaborative development model is a work of textual criticism, where the contributors attempt to compile the most accurate version of an ancient manuscript for which various, inconsistent copies exist. This is a significant field of study in classical studies, religion, and history; however, we are not aware of any project that currently employs an open content development model. Nonetheless, in such a scenario, the value of the work would remain the subjective evaluation of the original author’s perspective, but the validity or accuracy of the final collaborative work would depend on its theoretical fidelity to the missing original, following well-developed textual criticism techniques—thus, even this example might be more appropriately classified as a factual work: the lost original might indeed be an opinioned work, but its reconstruction would be a factual work. A related project, BibleWiki, expressly avoids opinions by restricting its scope to textual exegesis of the Bible (contextual literary meaning), adopting a Wikipedia-style “neutral point of view” approach; it thus attempts to produce a factual work, rather than an opinioned one. This identification and categorization of opinioned works has important implications for their open content development and corresponding licenses—in this case, they are most likely to be licensed in ways that forbid open content development. Stallman (2002, 142) argues: “The whole point of those works is that they tell you what somebody thinks or what somebody saw or what somebody believes. To modify them is to misrepresent the authors; so modifying these works is not a socially useful activity. And so verbatim copying is the only thing that people really need to be allowed to do”. Thus, even Richard Stallman, the originator of the copyleft legal concept, does not apply copyleft to his own essays, nor does he even permit derivatives; he only permits verbatim copying and redistribution. His typical license for his essays is something to the effect of: “Permission is granted to make and distribute verbatim copies of this book provided the copyright notice and this permission notice are preserved on all copies” (Stallman 2009). The equivalent Creative Commons licenses are CC-BY-ND or CC-BY-NC-ND.

Comparison with other classifications
Our classification is a finer-grained refinement of those of Stallman and of Cheliotis, providing a clear base for the implications that they draw in their enlightening articles. As we have noted, our classification largely corresponds with Stallman’s, except that we clearly explain two dimensions that distinguish our four categories; as a result, we subdivide Stallman’s “functional works” into utilitarian and factual works; our aesthetic and opinioned categories correspond directly to his other two categories. Cheliotis’ “functional goods” correspond roughly to our objectively-evaluated category (that is, both utilitarian and factual works), whereas his “cultural goods” seem to most closely correspond to our aesthetic category.
Although he makes a passing reference to opinioned works (“documents expressing diverse personal opinions or political views,” 2009, 231), it is unclear where he classifies them in his dual scheme. Indeed, Cheliotis indicates some uncertainty about how to classify certain products under this dual scheme, as the following sentence shows: “... content of primarily functional or educational value, like Wikipedia’s encyclopedic pages; though educational content in particular poses some challenges in terms of our comparison as it is not a tool per se but can be said to have both functional and cultural utility)” (2009, 234); this uncertainty is highlighted by his earlier categorization in the same article of Wikipedia as a cultural work (“...goods of cultural or entertainment value, as exemplified by the hugely successful Wikipedia, Flickr, YouTube and many more online communities”) (2009, 229). We definitely classify Wikipedia as a factual work, that is, as an objectively-evaluated (Cheliotis’ functional category) universalist work; educational works generally fall in the same category.

We opt to use the term “objectively-evaluated works” instead of Cheliotis’ “functional goods”, as the word “functional” is very similar to the idea behind “utilitarian”, yet does not make the important distinction between a relativistic evaluation of the value or quality of the work (in the case of utilitarian works) and a universalistic evaluation (in the case of factual works). Moreover, we avoid his term “cultural goods” because “cultural” is a sufficiently broad and imprecise word that it could justifiably refer to any kind of work; indeed, this is the reasoning behind the word’s use in the term “Free Cultural Works”.

Classification of composite works
Although we have developed a categorization system that encompasses all kinds of works, these works do not always fit neatly within the four categories. Some works by their very nature combine characteristics from multiple categories at the same time. For example, a documentary film primarily contains content that could be mainly factual or mainly opinioned, or perhaps a combination of both. (Indeed, the authors of many opinioned works consider them to be factual! However, an author’s unwillingness to submit their work for revision, whether through traditional or contemporary Internet-based means, is an indication of their work’s opinioned nature: the author does not want to be misrepresented—a universalistic quality criterion.) However, the accompanying cinematography and music would likely be mostly aesthetic. Another example would be an encyclopedia like Wikipedia: the textual content is mostly factual, yet it includes many images. These images would be mainly utilitarian (appropriate to support the accompanying text) or aesthetic (beautiful). And the organization of the encyclopedia would be a utilitarian design. These two examples should be sufficient to highlight the complexity inherent in many works. It is beyond the scope of this article to flesh out all the details of this important complication, but we do want to highlight one important implication. These complications do not indicate that our categories are more appropriately represented as a spectrum in which some kinds of works are more one category than another; rather, we maintain that the categories are discrete and distinct, even when the works themselves are complicated. Consider the two examples we gave in the previous paragraph: a documentary film can be decomposed into its transcript (opinioned), its video footage (aesthetic), its soundtrack (aesthetic), and the composition of words, video and music (utilitarian). A Wikipedia article can be decomposed into its text (factual), its images (aesthetic), and the composition, presentation and juxtaposition of text and images (utilitarian). This perspective of treating the complicated components as discrete rather than as spectral permits decomposition of elements, perhaps with different licenses for each discrete component. This is seen in Wikipedia in the distinct licensing regimes for the text (Wikipedia’s CC-BY-SA) and the images and media (Wikimedia Commons, with item-by-item licensing and fair-use provisions). Another example is seen in video games, another work genre that defies simple categorization under our classification. However, when a video game is decomposed into the software that executes the game and the game itself as an entertainment idea fixed in a medium, then the video game engine (utilitarian) is more readily distinguished from the game itself (aesthetic).
This decomposition leads to the very sensible case of sometimes licensing the game engine as OSS while maintaining the game content itself under all-rights-reserved copyright (there are numerous specific examples of just such a decomposition).7 When applying this framework to study specific works, it would be helpful for researchers to clearly focus on what sub-element of a work they are studying, and to consider the particular characteristics of that element. If they cannot easily separate these elements in their evaluation (for example, in the case of a documentary film), then the confounds should be noted and hypotheses adapted accordingly.

7 http://en.wikipedia.org/wiki/List_of_open-source_video_games#Free_engine.2C_proprietary_content

Implications of research on open source software for open content
An important application of our framework is for developing theoretically-grounded propositions. Although there is an abundance of OSS research (especially in the software engineering literature), we do not attempt to conduct an exhaustive review of that literature here. Rather, we highlight key findings from some of the leading information systems journals, which tend to be more theory-oriented in that they select topics of inquiry and research approaches that facilitate drawing wider implications for similar and related phenomena—in our case, open content. We searched the abstracts of six leading IS journals (European Journal of Information Systems, Information Systems Journal, Information Systems Research, Journal of AIS, Journal of MIS, and MIS Quarterly) as of October 2010 for the Boolean search string <"open source" OR "free software" OR FOSS OR FLOSS>. (FOSS is an abbreviation for “free and open source software”, and FLOSS for “free/libre/open source software”; both are widely used synonyms for the phenomenon.) Our search resulted in 27 articles. In all of these cases, note that open source software is a utilitarian work. Thus, many of the findings might have direct implications for other utilitarian works, such as how-to manuals and recipes; we also note some possible implications for other categories of open content. Table 2 summarizes these implications, demonstrating that although open source software findings are often applicable to open content, they are neither indiscriminately applicable nor equally applicable to all categories of open content. We do note, though, that these findings are generally not applicable to the various categories of freely redistributable works that we listed in the introduction as being outside the scope of our definition of open content. Our focus in this article is works that are licensed so as to permit unrestricted contribution and collaboration (including possibly “forking” a project, that is, starting a new separate project from a copy of the original); works that are not licensed for such free contribution and collaboration do not benefit from the implications we identify here from OSS research.

Table 2. Implications of OSS research on open content research

(The U, F, A and O columns indicate the applicability of the implications to utilitarian, factual, aesthetic and opinioned open content, respectively.)

| OSS literature review findings | U | F | A | O |
| Industry effects | | | | |
| OSS is increasingly used and developed by SMEs (Lundell, Lings and Lindqvist 2010) | H | H | M | M |
| Software companies must gradually re-orient their business models towards more profitable activities (Vitari and Ravarini 2009) | H | H | M | M |
| The quality of proprietary software decreases as the quality of OSS increases (Jaisingh, See-To and Tam 2008) | H | H | M | M |
| In software markets with high network effects, a proprietary software offering can survive only if it is more usable than the competing commercial version of the OSS (Sen 2007) | H | H | M | M |
| Shared macroculture and collective sanctions facilitate the coordination of exchanges in open source service networks; collective sanctions facilitate the safeguarding of exchanges (Feller et al. 2008) | ? | ? | ? | ? |
| Commercialization strategies | | | | |
| A software firm can benefit from giving away software due to the accelerated diffusion process and increased net present value of future sales (Jiang and Sarkar, 2009) | H | N | H | N |
| The opensourcer must not seek to dominate the process; must provide business expertise; must help establish a trusted ecosystem. The OSS community must have a democratic authority structure; must have a responsible and innovative attitude; must help establish a sustainable ecosystem (Ågerfalk and Fitzgerald, 2008) | H | H | N | N |
| OSS has been transformed into a commercially viable business strategy (Fitzgerald, 2006) | H | N | H | N |
| Competing proprietary works | | | | |
| In the presence of an OSS alternative, willingness to pay for proprietary software dropped (Raghu et al., 2009) | H | H | N | N |
| There is no overall difference in terms of evaluation criteria for proprietary or open source enterprise application software; implementation factors such as ease of implementation and support are much more crucial in the evaluation of OSS enterprise application software (Benlian and Hess, 2010) | H | H | N | N |
| OC communities and practices | | | | |
| OSS communities are combinations of gift-giving and scientific knowledge-sharing cultures. Highly skilled programmers collectively develop software. Loosely coupled communities are kept together by strong common values in line with the hacker culture (Ljungberg, 2000) | H | H | H | H |
| OSS communities rely on a gift-giving culture where ideas and products freely circulate. The giver gains a power from giving away that guarantees the quality of the released ideas or products. Quality is assessed through peer review (Bergquist and Ljungberg, 2001) | H | H | H | H |
| Framework to develop hybrid-OSS communities for software development relying on community building, governance, and infrastructure (Sharma, 2002) | H | H | H | H |
| Using an incremental life cycle is highly motivating and supports learning. The incremental approach may raise difficulties during development of complex new features (Jorgensen, 2001) | H | M | M | N |
| Only a small core of developers is responsible for most project outputs. Only a small number of programmers work together on a same file (Koch and Schneider, 2002) | H | H | N | N |
| Project success | | | | |
| Affective trust positively related to team size and effort; cognitive trust was not; freedom ideology negatively related to cognitive trust; team effort and communication quality positively related to task completion; team size was not (Stewart and Gosain, 2006) | H | H | H | N |
| OSS project effectiveness (in terms of team size, team effort and team’s level of completion) is affected by expertise integration. Expertise integration was found to have an impact on team size and team effort, which in turn were found to jointly influence task completion (Chou and He, 2010) | H | M | M | N |
| Work quality | | | | |
| OSS vendors released patches faster (Arora et al., 2008) | H | H | N | N |
| OSS peer review is a means to ensure better bug discovery and better bug-solving flexibility. The possibility to integrate additional OSS security functionalities enables high software security. But binary-only programs are less vulnerable, and added peer review provides visibility to potential attackers (Payne, 2002) | H | H | H | H |
| The quality of code produced by OSS is lower than that which is expected by industrial standards. To a certain extent, the average component size of an application facilitates OSS development and is also negatively related to user satisfaction (Stamelos et al., 2002) | H | H | N | N |
| Project membership | | | | |
| Developers prefer joining a project when they have past relationships with the initiator, and other members are more experienced (Hahn et al., 2008) | H | N | H | N |
| Non-restrictive OSS licenses and sponsored projects attract greater user interest; non-market sponsors attract greater development activity; restrictive licenses did not attract greater development activity (Stewart et al., 2006) | H | H | H | N |
| Intrinsic motivation of challenge (problem solving) is associated with the developers’ preference for licenses with moderate restrictions, while the extrinsic motivation of status (through peer recognition) is associated with developers’ preference for licenses with least restrictions (Sen et al., 2008) | H | H | M | N |
| Sustained participation is associated with the co-evolution of situated learning and identity construction. Initial conditions to participate do not predict long-term participation (Fang and Neufeld, 2009) | H | N | H | N |

Key: H = Highly applicable; M = Moderately applicable; N = Not applicable; ? = Uncertain applicability

Industry effects
A number of studies have investigated the effects of OSS on the software industry. Lundell et al (2010) found that OSS products are increasingly used and developed by small and medium enterprises (SMEs). Vitari and Ravarini (2009) concluded that software companies must gradually re-orient their business models towards more profitable activities. Jaisingh et al (2008) found that the quality of proprietary software decreases as the quality of OSS increases. Sen (2007) found that in software markets with high network effects, a proprietary software offering can survive only if it is more usable than the competing commercial version of the OSS. Feller et al (2008) found that in open source service networks, the facilitating of coordination and the safeguarding of exchanges between commercial vendors and the OSS development community enable access to, and transfer of, strategic resources. A shared macroculture and collective sanctions facilitate the coordination of exchanges; collective sanctions facilitate the safeguarding of exchanges. These studies convey a general message that is applicable to content providers in all four categories. The radical change that is occurring in the software industry has already spread beyond the software world to the creation and dissemination of any digitally enabled, information-based product. Nonetheless, these findings particularly apply to works in which quality is improved by an increase in the number of contributors, which is usually the case for utilitarian and factual works; while they are also applicable to the respective industries of aesthetic and opinioned works, this is to a lesser extent than for the objectively-evaluated categories. However, Feller et al’s (2008) study is very particular to the dynamics of the software industry. Because of the nascent nature of open content, there are no open content service networks currently in existence that are comparable to the subjects of Feller et al’s study. Because the dynamics of any such future exchanges are highly idiosyncratic to the nature of the work (Feller et al studied networks within the software development industry), we don’t believe that we could generalize their findings to other categories of open content, nor even to non-software utilitarian works, for that matter.

Commercialization strategies
For the long-term sustainability of any open content product, there must be a sound business model underlying it. Whereas some products like Wikipedia are funded by a not-for-profit foundation supported by donations and grants, a commercial revenue strategy is the most viable means for the vast majority of open content projects. Similarly, OSS is sometimes funded by not-for-profit foundations (such as the Mozilla Foundation for Firefox and Thunderbird), but the most successful products are more often supported by aligning their interests with profit-oriented enterprises (such as Linux’s links with Red Hat and IBM, MySQL’s purchase and commercialization by Oracle, and Google’s development and release of Chrome). Research into how OSS has succeeded in commercialization is valuable for the long-term viability of open content projects. Fitzgerald (2006) described the general process by which OSS has been transformed into a commercially viable business strategy. While his findings are largely idiosyncratic to OSS because of its maturity, some other categories of open content could be expected to eventually attain such maturation experiences. Other utilitarian works such as how-to manuals and engineering designs could conceivably be developed to the point where businesses find profit opportunities in their development. We also believe that there are some fledgling moves in this direction with some aesthetic works, as can be seen in Magnatune’s and Jamendo’s attempts to simultaneously sell and distribute Creative Commons-licensed music.
However, factual works might have difficulty finding commercial opportunities because universalist works are not as readily subject to the monopolistic competition of relativist works. Moreover, it is likely that the more valuable opinioned works would be less likely to be distributed under open content licenses that would deprive them of their commercial potential. In studying “opensourcing”, a deliberate strategy of releasing a previously proprietary product as open source, Ågerfalk and Fitzgerald (2008) observed that, for a successful relationship, the opensourcer (the “customer”, the company that implements opensourcing) must not seek to dominate and control the process; must provide professional management and business expertise; and must help establish an open and trusted ecosystem. Correspondingly, the OSS community must have a clear and democratic authority structure with transparent processes; must have a responsible and innovative attitude; and must help establish a professional and sustainable ecosystem. Although there are numerous cases in the Internet age of content providers releasing their work as gratis content, we don’t know of any case other than with software of their releasing it for public open content development. We don’t consider this very meaningful for subjectively-evaluated (aesthetic and opinioned) works, for two reasons. First, such works are typically created by small teams where the subjective evaluation criteria can be more easily agreed upon; thus, it doesn’t make sense to release them to a large community for development. Second, the creation of these works tends to involve finite projects that have a definite ending with a limited number of revisions. Thus, a content owner would not need to develop a long-term relationship with the public. In contrast, even though there are no current examples, we would consider that the “opensourcing” model could be effective for both utilitarian and factual works, where crowdsourcing of large segments of the public could result in higher quality works—it would be worthwhile to expend the effort to build good relationships with the public. Jiang and Sarkar (2009) found that a software firm can benefit from giving away software due to the accelerated diffusion process and increased net present value of future sales. The success of works that answer a certain user need at a certain point in time (such as utilitarian works), or that pertain to artistic domains and are thus susceptible to fashion effects (aesthetic works), tends to depend on the initial speed of their spread. Such products then become accepted standards among users not because they are of better quality than other similar products, but rather because of network externality factors. Similar implications can be drawn for factual works, but to a lesser extent. Opinioned works seem to be less affected by diffusion speed.

Effects on competing proprietary works
Raghu et al (2009) found that when OpenOffice.org was presented as an alternative to Microsoft Office, students in an experiment were less willing to pay for Microsoft Office. This important finding indicates that in the presence of open content utilitarian alternatives, proprietary works might diminish in market value. The drop in sales of general reference works in light of gratis content available on the web (both Free Cultural Works and open access) indicates that this finding is very applicable to factual works as well. However, because of the idiosyncratic and subjective nature of aesthetic and opinioned works, we do not expect this finding to have any bearing on these other two categories of works. Benlian and Hess (2010) compared the evaluation criteria companies use to select proprietary or OSS enterprise application software and found significant differences. We expect the same pattern of applicability: these findings are relevant to utilitarian and factual works, but not to aesthetic and opinioned works.

Open content production communities and practices
Ljungberg (2000) observed that OSS communities are combinations of gift-giving and scientific knowledge-sharing cultures. Highly skilled programmers collectively develop software, and the loosely coupled communities are kept together by strong common values in line with the hacker culture. Bergquist and Ljungberg (2001) similarly found that OSS communities rely on a gift-giving culture where ideas and products freely circulate. The giver gains a power from giving away that guarantees the quality of the released ideas or products. Quality is assessed through peer review. Sharma (2002) presented a framework for developing hybrid-OSS communities for software development relying on community building, governance, and infrastructure. In these studies, the notion of a development community is particularly relevant to open content initiatives that involve a high number of contributors, such as utilitarian and factual works. Moreover, the importance of peer review as a means to efficiently assess content quality is particularly crucial in objectively-evaluated works, where quality and excellence standards can be agreed upon. Thus, these studies have implications for all categories of open content development.

Jorgensen (2001) found that using an incremental life cycle is highly motivating and supports learning, though it may raise difficulties during development of complex new features. This is applicable to utilitarian works and, to a lesser extent, to factual and aesthetic works, in which learning can also be one of the main motives for individuals to participate. In a study investigating the characteristics of the GNOME project, Koch and Schneider (2002) found that only a small core of developers was responsible for most of the project outputs. This is relevant to works that manifest a certain degree of modularity and a large number of contributors, as is usually the case for utilitarian and factual works.

Project success
In investigating OSS project success factors, Stewart and Gosain (2006) found that affective trust was positively related to OSS team size and effort; cognitive trust was not. Various aspects of OSS ideology were positively related to both cognitive and affective trust; however, freedom beliefs were negatively related to cognitive trust. Team effort and communication quality were positively related to task completion, but team size was not. Collaborative OSS values were positively related to communication quality. Because these findings relate to the operation of teams once they are formed, we believe they would be generally applicable to all categories except opinioned works, where most of the work is done either individually or in very small teams. Because aesthetic works generally operate with much smaller teams than do utilitarian and factual works, we don’t expect Stewart and Gosain’s findings related to team size to be applicable; however, the other findings should be. Gallivan (2001) minimized the importance of trust in OSS communities and found that OSS communities rely on other forms of social control and self-control; these findings may characterize any open content community involving a large number of individuals. Chou and He (2010) developed a theoretical framework to investigate how OSS project effectiveness (in terms of team size, team effort and the team’s level of completion) is affected by expertise integration. Expertise integration was found to have an impact on team size and team effort, which in turn were found to jointly influence task completion. We can apply such findings to utilitarian and factual works, and to a lesser degree to aesthetic works.

Quality of works
Arora et al (2008) found that OSS vendors released software patches for vulnerabilities faster than do proprietary vendors. A vulnerability is an objectively-determined fault in the software. For open content, this would be relevant to utilitarian works (such as software and how-to manuals), where errors result in undesirable functionality, and to factual works, where errors might involve factual inaccuracies, privacy violations, or the publishing of illegal information (such as posting child pornography or anti-government rhetoric, depending on the legal jurisdiction). The implication is that open content providers are perhaps more responsive to repairing errors than are proprietary content providers. However, given the small project teams for aesthetic and opinioned works, we don’t believe an open content model would make a difference there. The notion of vulnerability was also investigated by Payne (2002), who compared the relative security of proprietary and open source software. On one hand, for objectively-evaluated works, openness enhances quality. On the other hand, for subjectively-evaluated works, openness tends to increase controversy. Thus, these findings have implications, albeit in opposite directions, for all categories of open content. Stamelos et al. (2002) found that the quality of code in 100 open source applications written for Linux was lower than that expected by industrial standards; this is mainly relevant for utilitarian and factual works.


Project membership
Hahn et al (2008) found that “OSS developers prefer joining a project whose initiator has developed a strong tie with them through repeated collaborations and shared administration of past projects. They are also more likely to join a project whose noninitiators have more project experience” (2008, 385–386). In addition to being applicable to other utilitarian works, we expect this finding to further apply to aesthetic works. Because these two categories are relativistic in their value criteria, it is important that collaborators have a common vision of the goal of the project; past collaborations are a primary way of achieving such a shared vision. However, this finding is probably not applicable to the two universalistic categories, for opposite reasons. For factual works, the objective evaluation criteria substantially reduce the need for close interpersonal relationships in order to productively build towards the common goal of accumulating facts. For opinioned works, if people collaborate at all, it would only be with those with whom they have very close relationships; we don’t believe that an open content model would change this dynamic in any way. In a study of what factors attract developers to participate in new OSS projects, Stewart et al (2006) found that non-restrictive OSS licenses attract greater user interest; sponsored projects attract greater user interest, and more so for nonmarket sponsors; nonmarket sponsors attract greater development activity; however, restrictive licenses did not attract greater development activity. In addition to being applicable to other utilitarian works, we expect these findings to further apply to aesthetic works, because it is important that collaborators have a common vision of the goal of the project. However, as with our implications from Hahn et al’s (2008) study, which also relates to the joining of projects, we do not expect that an open content model would have any effect on project joining for opinioned works. Sen et al. (2008) found that the intrinsic motivation of challenge (problem solving) is associated with developers’ preference for licenses with moderate restrictions, while the extrinsic motivation of status (through peer recognition) is associated with developers’ preference for licenses with the least restrictions. Fang and Neufeld (2009) found that sustained participation is associated with the co-evolution of situated learning and identity construction; initial conditions to participate do not predict long-term participation. We believe these findings would hold for most categories of works: mostly for utilitarian and factual works, and to a lesser extent for aesthetic works, because we expect this category to have generally smaller team sizes than the other categories.

Research agenda for open content research
Beyond deriving implications from OSS research for open content research, we want to develop a comprehensive research agenda. Towards this agenda, we first reorganize Table 2 to group the findings according to which OSS studies have implications for which open content categories; we accomplish this reorganization in Table 3.

Table 3. Proposed implications of some OSS research findings for open content
(LA = level of analysis: A = software artefact, I = individual, G = group/project/community, O = organization. The U, F, A and O columns indicate applicability to utilitarian, factual, aesthetic and opinioned works.)

| Key findings from OSS research | LA | U | F | A | O |
| Implications for all four open content categories | | | | | |
| OSS peer review is a means to ensure better bug discovery and better bug-solving flexibility. The possibility to integrate additional OSS security functionalities enables high software security. But binary-only programs are less vulnerable, and added peer review provides visibility to potential attackers. (Payne, 2002) | A | H | H | H | H |
| OSS communities are combinations of gift-giving and scientific knowledge-sharing cultures. Highly skilled programmers collectively develop software. Loosely coupled communities are kept together by strong common values in line with the hacker culture. (Ljungberg, 2000) | G | H | H | H | H |
| OSS communities rely on a gift-giving culture where ideas and products freely circulate. The giver gains a power from giving away that guarantees the quality of the released ideas or products. Quality is assessed through peer review. (Bergquist and Ljungberg, 2001) | G | H | H | H | H |
| Framework to develop hybrid-OSS communities for software development relying on community building, governance, and infrastructure. (Sharma, 2002) | G | H | H | H | H |
| OSS is increasingly used and developed by SMEs (Lundell et al. 2010) | O | H | H | M | M |
| Software companies must gradually re-orient their business models towards more profitable activities (Vitari & Ravarini 2009) | O | H | H | M | M |
| The quality of proprietary software decreases as the quality of OSS increases (Jaisingh et al. 2008) | O | H | H | M | M |
| In software markets with high network effects, a proprietary software offering can survive only if it is more usable than the competing commercial version of the OSS (Sen 2007) | O | H | H | M | M |
| Implications for utilitarian, factual and aesthetic works, but not for opinioned works | | | | | |
| Non-restrictive OSS licenses and sponsored projects attract greater user interest. (Stewart et al., 2006) | I | H | H | H | N |
| Sustained participation is associated with the co-evolution of situated learning and identity construction. Initial conditions to participate do not predict long-term participation. (Fang and Neufeld, 2009) | I | H | H | M | N |
| Using an incremental life cycle is highly motivating and supports learning. The incremental approach may raise difficulties during development of complex new features. (Jorgensen, 2001) | G | H | M | M | N |
| Affective trust positively related to team size and effort; cognitive trust was not; freedom ideology negatively related to cognitive trust; team effort and communication quality positively related to task completion; team size was not. (Stewart and Gosain, 2006) | G | H | H | H | N |
| OSS project effectiveness (in terms of team size, team effort and team’s level of completion) is affected by expertise integration. Expertise integration was found to have an impact on team size and team effort, which in turn were found to jointly influence task completion. (Chou and He, 2010) | G | H | H | M | N |
| A software firm can benefit from giving away software due to the accelerated diffusion process and increased net present value of future sales (Jiang and Sarkar, 2009) | O | H | M | H | N |
| Implications for utilitarian and factual works only | | | | | |
| OSS vendors released patches faster (Arora et al., 2008) | A | H | H | N | N |
| The quality of code produced by OSS is lower than that which is expected by industrial standards. To a certain extent, the average component size of an application facilitates OSS development and is also negatively related to user satisfaction. (Stamelos et al., 2002) | A | H | H | N | N |
| In presence of OSS alternative, willingness to pay for proprietary software dropped (Raghu et al., 2009) | I | H | H | N | N |
| Non-market sponsors attract greater development activity; restrictive licenses did not attract greater development activity. (Stewart et al., 2006) | G | H | H | N | N |
| Only a small core of developers is responsible for most project outputs. Only a small number of programmers work together on a same file. (Koch and Schneider, 2002) | G | H | H | N | N |
| Trust is not that important; OSS communities rely on forms of social control and self-control. (Gallivan, 2001) | G | H | H | N | N |
| The opensourcer must not seek to dominate the process; must provide business expertise; must help establish a trusted ecosystem. The OSS community must have a democratic authority structure; must have a responsible and innovative attitude; must help establish a sustainable ecosystem. (Ågerfalk and Fitzgerald, 2008) | O | H | H | N | N |
| There is no overall difference in terms of evaluation criteria for proprietary or open source enterprise application software; implementation factors such as ease of implementation and support are much more crucial in the evaluation of OSS enterprise application software (Benlian and Hess, 2010) | O | H | H | N | N |
| Implications for utilitarian and aesthetic works only | | | | | |
| Intrinsic motivation of challenge (problem solving) is associated with the developers’ preference for licenses with moderate restrictions, while the extrinsic motivation of status (through peer recognition) is associated with developers’ preference for licenses with least restrictions. (Sen et al., 2008) | I | H | N | H | N |
| Developers prefer joining a project when they have past relationships with the initiator, and other members are more experienced. (Hahn et al., 2008) | I | H | N | H | N |
| OSS has been transformed into a commercially viable business strategy (Fitzgerald, 2006) | O | H | N | H | N |
| Uncertain as to applicability of implications beyond software | | | | | |
| Shared macroculture and collective sanctions facilitate the coordination of exchanges in open source service networks; collective sanctions facilitate the safeguarding of exchanges (Feller et al. 2008) | O | ? | ? | ? | ? |
We then classified the reviewed OSS articles according to five distinct levels of analysis: software artefact, individual, team/project/community, organization, and society (Niederman et al. 2006). Since we restricted our examination to just a few leading IS journals, we do not attempt to make any inferences about the total number of articles with implications for open content. For example, had we searched software engineering journals and conference proceedings, we would surely have identified far more articles where the software artefact or the individual developer is the primary level of analysis, probably with a different distribution of implications from what we have found here. Nonetheless, we expect our IS sample to be more amenable to deriving theoretical implications.

We categorized the 27 articles into only the first four levels of analysis, as no article in our sample studied OSS from a societal perspective. The societal level of analysis would pertain to issues related to the diffusion of the free software philosophy to non-software areas that involve the licensing of intellectual property, which is the open content phenomenon itself. Based on the findings generated by the literature review and using the framework we introduced earlier, we have derived a research agenda (see Table 4). Organized by open content work category and level of analysis, the agenda suggests a sample of relevant research topics in an attempt to provide some meaningful and sound directions for open content research.

Table 4. Research agenda and research topics sample for open content research

Artefact level
- All four open content categories: influence of peer review practices on OC product quality; impact of OC product visibility on OC product quality.
- Utilitarian, factual and aesthetic works only: relationship between OC product license and product quality; influence of an incremental life cycle for OC production on OC product quality.
- Utilitarian and factual works only: assessment of OC work quality using industrial standards; determining specific standards to evaluate OC product quality; differences of production patterns between OC and proprietary works; influence of the average component size of an OC product on its development process.
- Utilitarian and aesthetic works only: relationship between the degree of license restrictiveness of an OC work and its quality; relationship between peer recognition mechanisms and OC product quality.

Individual level
- All four open content categories: relationship between OC product visibility and individual contribution quality; influence of peer review on contributor performance in OC projects.
- Utilitarian, factual and aesthetic works only: relationship between OC product license and contributor interest; whether sponsored OC projects attract greater contributor interest; applicability of legitimate peripheral participation theory to explain sustained participation in the OC context; role of initial conditions in predicting long-term participation in OC projects.
- Utilitarian and factual works only: impact of the existence of OC product alternatives on the willingness of a consumer to pay for proprietary products; relationship between the average component size of an OC product and motivation to join an OC project.
- Utilitarian and aesthetic works only: relationship between contributor intrinsic/extrinsic motivations and the degree of license restrictiveness of an OC product; influence of past relationships on joining an OC project; influence of past experiences on joining an OC project.

Group/project/community level
- All four open content categories: study of the shared cultural components that characterize OC communities; study of the peer review mechanisms governing OC communities.
- Utilitarian, factual and aesthetic works only: impact of the incremental OC production approach on the motivation and learning of product contributors; using a trust perspective, determining the factors affecting team size, team effort, and task completion in the OC context; relationship between expertise integration and OC project effectiveness, team size and team effort.
- Utilitarian and factual works only: organization and structure of OC communities; evolution of the number of contributors working on the same OC product component; role of trust-building mechanisms in OC production; instances of social control and self-control mechanisms in OC production.
- Utilitarian and aesthetic works only: study of the peer recognition mechanisms that govern OC communities; impact of OC group characteristics on group effectiveness; study of the problem-solving mechanisms that characterize OC production.

Organization level
- All four open content categories: potential for SMEs to use and develop OC products; new forms of business models engendered by the spread of OC development; quality differences between OC and proprietary products; role played by network effects in markets where OC and proprietary products compete.
- Utilitarian, factual and aesthetic works only: benefits for a company of releasing a product under an OC license; diffusion of a product released under an OC license by a company; impact on the net present value of future sales when releasing a product under an OC license.
- Utilitarian and factual works only: description of the different forms of ecosystem that govern OC production; role of democratic authority structures in helping OC communities function and perform better; determining evaluation criteria that can be used to evaluate an OC product solution.
- Utilitarian and aesthetic works only: determining whether OC production can become commercially viable; study of the different types of OC production-based business strategy.

Theoretical background for specific research directions
This research program draws from multiple research domains, some of which have already established an extensive body of literature over the years (i.e., open source software and the marketing of digital products; for reviews, see Peitz and Waelbroeck 2006 and Feller and Fitzgerald 2001), while others are emerging fields that have recently received considerable attention (i.e., Web 2.0, user-generated content, and wikis; see Kane and Fichman 2009). Here, we describe the theoretical approach of the studies on the quality outcomes of open content products and on the marketing outcomes of open content. In addition, we discuss some initial directions in research on open content in developing countries.

The quality of open content products
Guimaraes et al (2007) found that the quality of an information system positively affects its use, its job impact on users, and the benefits they derive from the system. Although here we are concerned with information products rather than with information systems per se, this finding is nonetheless relevant, since the open content development model is mostly directed at producing information products of a certain quality with the hope that consumers will use the products for their various goals. Because of the novelty of the area, there is relatively little literature on the quality of open content products in general (other than Wikipedia, which we discuss shortly). The main theoretical streams that have some bearing on this area include work on information systems quality. Among this literature, there are some studies that might have some relevance to open content quality. Kahn et al (2002) developed benchmarks for assessing information quality, which is one important aspect of information product quality. Jøsang et al (2007) synthesized a framework of trust and reputation systems that have been employed to evaluate and signal the quality of information products. Moe (2009) found that customer ratings and word-of-mouth affect the sales of good quality information products, but have little effect on the sales of poor quality products (whose sales remain low regardless). Although related more specifically to website design quality, Cyr (2008) found that trust positively affects a site visitor’s intention to return to the website. For website-based open content such as Wikipedia, such trust indicates a key aspect of readers’ perception of the resource’s quality and usefulness. Not surprisingly, considering the recency of this field of research, the only scholarly work thus far conducted on the quality of non-software open content products has been the extensive body of research that has attempted to evaluate the reliability of Wikipedia, variously expressed as its trustworthiness, quality, or accuracy. In other words, why trust the contents of an encyclopedia that anyone can edit? We have already covered some of these studies in other work (Okoli 2009); we repeat our discussions here with a few additional comments. An early scholarly assessment of Wikipedia is a comparison of selected science articles in Wikipedia and Encyclopaedia Britannica conducted by the journal Nature (Giles 2005). The study found Wikipedia’s accuracy comparable to that of Britannica: among 42 articles, Wikipedia articles had an average of four errors each, and Britannica articles had three.

In spite of this early favourable Nature study, most of the studies of Wikipedia’s reliability have judged it unfavourably. Denning et al (2005) question whether Wikipedia’s collaborative editing process is capable of producing accurate and authoritative information on a thoroughly comprehensive scope of human knowledge. Gorman (2007) contends that Wikipedia has no basis to call itself an encyclopedia, and that without regulation of its article-writing process, its information is unreliable. Fiedler (2008) discusses a well-publicized case of a prank entry of John Seigenthaler, Sr., a famous journalist, implicating him in the assassinations of John and Robert Kennedy (Wikipedia contributors 2008a). This false information remained on Wikipedia for over four months before being discovered. It has become a strong warning against the weakness of a major public information source “that anyone can edit”. Svoboda (2006) discusses various criticisms that some scholars have levied against Wikipedia. They contend that in spite of the attempts of some Wikipedians to provide quality control, the lack of formal controls results in the lowest quality contributions prevailing, with unclear standards of accuracy or writing quality. Waters (2007) contends that Wikipedia articles are at best a sum of thousands of opinions; this process, he says, could not logically result in articles of quality. In contrast to the negative tone taken by many other studies, when Arter (2005) came across the article on “Auditing” and found it lacking information, he went ahead and filled in missing information to make the article more complete. This is in contrast to some other reviewers of Wikipedia who criticize its shortcomings rather than quickly applying the wiki concept to bring the encyclopedia more up to standard. Thus, by the end of his experience, Arter had no substantive criticism of the article on “Auditing”—this approach of resolving Wikipedia’s weaknesses by correcting them directly is the very heart of Wikipedia. Ironically, whereas most of the general review articles and those that assessed Wikipedia’s reliability have expressed strong doubts about its reliability, all the studies that have scrutinized Wikipedia from an epistemological perspective have strongly validated its epistemic qualities as a valuable source of knowledge (Schiltz, Truyen and Coppens 2007). Epistemology, the theory of knowledge, is “a branch of philosophy concerned with the nature and scope of knowledge” (Wikipedia contributors 2008b). A definite sub-stream of research has begun that explores Wikipedia as an epistemological phenomenon, examining how Wikipedia and related phenomena affect and shape people’s consciousness of how they know what they believe they know. This contrast in evaluation between classical views of knowledge and an epistemological re-evaluation suggests that Wikipedia represents a significant shift in how knowledge is evaluated and received, a shift of “seismic” proportions (Dede 2008). Fallis (2008) presented a compelling case for the reliability of Wikipedia. He highlighted the epistemic problems with comparing Wikipedia with some conceptually “absolute” sources of knowledge, such as direct evaluation by experts. He argued that it is rather more meaningful to judge the reliability of Wikipedia by comparing it with other encyclopedias such as Britannica. With such a criteria, he argued that Wikipedia has been repeatedly shown to be quite reliable (Giles 2005). 
Moreover, when compared to its more likely alternative sources on the Web, Wikipedia is strikingly superior as a source of knowledge (Magnus 2006). He argued that Wikipedia has important epistemological properties ("e.g., power, speed, and fecundity") that offset its shortcomings, and that it is thus an important source of knowledge today. Dede (2008) considered Wikipedia the epitome of Web 2.0, one of whose defining characteristics he considered to be "a shift from the presentation of material by Web site providers to the active co-construction of resources by communities of contributors". He contrasted the epistemology of the classical model of knowledge creation by experts with that of knowledge created by the consensus of a community of contributors, proposing that various Web communities present epistemologies between these two extremes. Eijkman (2008), taking a perspective very similar to Dede's (2008), argued that Web 2.0 presents a "non-foundational" model of learning and knowledge, in contrast to the classical "foundational" perspective. He argued that educators need to broaden their perspective on the epistemological basis of student learning in order to appreciate the need for, and the value of, students being acculturated into the emerging knowledge landscape that Web 2.0 presents.

Matychak (2008) argued that rather than nullifying the concept of truth, Web 2.0 applications like Wikipedia generate knowledge from groups of people spanning various social, economic, and geographic strata that were previously excluded from the knowledge market, providing a democratic landscape for knowledge creation and dissemination. Although it takes a software engineering approach to the design of systems, Garud et al's (2008) study on designing systems with inherently incomplete specifications has significant epistemological implications. They argue that when designing moving targets such as open source software and open content projects like Wikipedia, designers ought not to insist on planning out a completed system; rather, they should aim at a system that, at any point in its development, remains incomplete. The epistemological implication is that knowledge should be seen not only as a way to ascertain absolute truth, but also as a process of discovery that is never complete. Wikipedia is quite amenable to this view of the nature of knowledge.
These contrasting studies, along with others in a similar vein, provide the base material for a systematic literature review on the reliability of Wikipedia, which is one of the main studies in this research program. Although Wikipedia as an encyclopedia is only one very particular kind of open content product, it is the only non-software product whose quality has thus far been studied; it is therefore important and relevant to rigorously synthesize the knowledge from this stream of research in order to draw implications for other kinds of open content.
Marketing of digital music
For studying the market outcomes of open content, the only well-developed commercial marketplace so far is that for digital music. Thus, we will study how open content licensing of digital music affects market outcomes (revenue from sales and number of sales). The bulk of scholarly research on the marketing effects of digital music has focused on the effects of piracy (see Peitz and Waelbroeck 2006 for a review). For example, Sundararajan (2004) explored a seller's options in digital rights management (DRM) controls and tiered pricing strategies for maximizing revenues at varying degrees of piracy. In addition, Chellappa and Shivendu (2005) proposed pricing strategies for digital experience goods (such as music), where consumers do not know how much they value the good until they have sampled it (whether legally or illegally). Most musicians who have distributed their music through traditional means in the past continue to prefer digital distribution channels that control downloading and distribution through DRM. The piracy literature is of particular concern to these musicians, as the Internet has made piracy of their digitized works economically feasible as never before.
Some more recent studies focus on alternative distribution models rather than on piracy per se. Sinha et al (2010) studied the recent trend among online music sellers of providing music free of DRM, and found that this strategy converts some potential pirates into paying customers. However, DRM-free music is not necessarily open content; indeed, their study does not concern open content at all, since piracy remains an issue. Doerr et al (2010) investigated the effects of the characteristics of streaming music services on customers' perceived value of digital music and on their willingness to pay for it.
However, streaming music is not open content: customers are not authorized to save the music, and certainly not to redistribute it. Iyengar (2010) compared various pricing plans for digital music and found that customers derive greater value from à la carte pricing models than from subscription-based pricing; however, the music services he studied likewise do not authorize redistribution of the music.
In contrast, an increasing number of newer musicians are willing to distribute their work under licenses that authorize free downloading and redistribution, without guaranteed economic returns. Open content and Creative Commons licensing expressly obviate piracy by authorizing gratis downloading and redistribution of digital products. A number of online outlets have arisen to serve such musicians by helping consumers find and obtain their music.

Magnatune is an online record label that accepts only carefully screened music; it both sells the music as a regular digital music distributor and makes it available, for non-commercial use only, under the Creative Commons Attribution-NonCommercial-ShareAlike license. Jamendo distributes digital music licensed under any Creative Commons license of the musician's choice, and it does not screen submissions; like Magnatune, it provides full music licensing services, depending on the license terms. The Association Musique Libre is a non-profit organization that promotes the distribution of Creative Commons-licensed music. Its Dogmazic service is similar to the standard Jamendo service, distributing music under a variety of licenses that permit gratis download and listening, while its Pragmazic service sells CD-quality downloads of redistributable music. ccMixter is a website focused on music remixing (that is, the creation of derivative works); it does not sell music, but rather provides music to the musician community under Creative Commons licenses that authorize remixing and distribution of derivatives.
There has been very little research addressing distribution issues for music that is licensed in ways that obviate piracy. One study with possible implications for open content is Smith and Telang's (2009) finding that free over-the-air movie broadcasts increased both legal sales of DVDs of the movie and illegal torrent downloads at the same time. This finding suggests that even in the presence of piracy, providing a one-time free (that is, gratis) alternative could increase revenue from sales. Although the one-time nature of a TV broadcast limits the applicability of this finding to open content, it does imply that a gratis distribution channel can simultaneously increase paid sales. We have been unable to find any studies to date that consider the effects on digital music distribution of licenses under which redistribution is in fact legal.
This proposed study breaks new theoretical ground in investigating an increasingly popular solution to the piracy problem: cases where musicians are willing to authorize redistribution of their work. As described below in the research methodology section, the study would investigate the effects of this distribution approach on the same outcomes of concern in the piracy literature: the extent of distribution of the music, consumers' willingness to pay, and the total economic returns for the distributed work (which would be roughly the product of the extent of distribution and consumers' willingness to pay).
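To make the last of these outcome measures concrete, the rough approximation stated above can be sketched as a simple relation; the notation here is purely illustrative on our part and is not drawn from the cited piracy literature:

$R \approx D \times \bar{w}$

where $R$ denotes the total economic returns from the distributed work, $D$ the extent of distribution (the number of copies obtained, whether paid or gratis), and $\bar{w}$ consumers' average willingness to pay per copy, which in voluntary-payment and pay-what-you-want models bounds the amount actually paid.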
Open content in developing countries
Although open source software arose in developed nations, as have virtually all information technologies, much attention has been given to its promising benefits for the development of the domestic software industries of developing countries. A number of reports have been commissioned by governmental and non-governmental agencies on the general subject of information and communication technology (ICT) for development, and many of these mention the role of open source software (Weerawarana and Weeratunga 2004; Giancarlo Nuti Stefanuto and Sergio Salles-Filho 2005; Bannerman 2007). The United Nations Conference on Trade and Development (UNCTAD) (2007), in its annual report on science and technology for development, described the value of open source software in building native capacity for innovation within countries that foster this model of software development. However, in extending the open source model beyond software, the report mainly focused on applications of open innovation, in which enterprises in an industry collaborate with each other and with consumers to collectively develop business and engineering solutions to common problems and challenges. There thus remains significant potential for the development of non-software open content to benefit developing countries; this section reviews the nascent scholarly work in this area.
Marsden (2007) reported on the hacking of bicycles that he observed in Zambia and Malawi, where users modified the bicycles' factory-built functionality. Although he referred to this as "open source bicycles", it is better considered an application of the open content phenomenon to the development of bicycles.

However, it is unclear from his report what the legal status of such hacking might be. Informal, unauthorized hacking of purchased equipment is universally common (often called "modding"), such as modifying video games and video game consoles to run new games or other software that was not originally intended (Wikipedia contributors 2008c). Doctor's studies on open source platforms for building digital repositories and digital libraries in India (Doctor 2007, 2008; Doctor and Ramachandran 2008) merged open source software with open content, since the software platform naturally used the same development philosophy as the information product that was developed upon it. While there is no technical necessity for the platform for open content development to be open source, it is natural that those who are accustomed to working within the flexible parameters of open content would want a software platform for development that is equally flexible. This is the case with Wikipedia, where an open source wiki software platform (MediaWiki) was custom-built to serve its needs.
Wikipedia is noteworthy as the world's largest non-software open content project. Issues similar to those that face the use and development of OSS in developing countries also face the use and development of Wikipedia by citizens of developing countries, particularly in the non-European-language versions of Wikipedia. However, no scholarly journal article has yet treated the use of Wikipedia in developing countries. Devouard (2005) presented the Wikimedia Foundation's goal of helping to build "a world in which every single person is given free access to the sum of all human knowledge". The Foundation's primary thrust in this area is in sponsoring and fostering the development of Wikipedia. However, she pointed out that due to low Internet access, low literacy, and limited capability in European languages, a majority of world citizens, particularly in Africa, are practically excluded from this access. She laid out the Foundation's plans for providing Africans with access to and participation in this open content by various means, "such as DVDs (free or for a small fee), USB key, OLPC (One Laptop per Child) computers, books, etc." (Ola Abd El-Tawab 2008). Other initiatives supported by interested third parties include the production and distribution of offline versions of Wikipedia on CDs, particularly in non-English African languages (Geekcorps Mali 2008). The Wikimedia Foundation is presently engaged in an active campaign to raise international awareness of the benefits of promoting access to and participation in its open content projects for citizens of developing countries (Wikia contributors 2008). Hopefully, as Wikipedia becomes more diffused in developing countries, it will attract scholarly attention to understand how open content in general can benefit developing countries.

Conclusion
We have argued that it is important to frame the theoretical knowledge base that contributes to understanding the open content model, to evaluate the quality of its resulting products, and to evaluate the benefits of such a production model to its producers. The existing research on open source software has only considered its traditional role in software development. However, the same open source philosophy can be applied to the collaborative creation of non-software information products, such as encyclopedias, books, and dictionaries. In these areas, the open content development model might have much greater potential for significant societal impact because it is not restricted to the domain of highly trained specialists, as is the case with the software developers who contribute to open source software. On the contrary, any literate person with an Internet connection can contribute to Wikipedia. Whereas certain kinds of open content will always require special skills for contribution (such as open music and open video), such skills are much more widely dispersed among the general population than are software development skills, which promises much broader participation. Moreover, considering the increasing concern about piracy of digital goods, it is very important to investigate the effectiveness of open content licensing as a solution to content producers' concerns.
This article is the first to comprehensively examine the application of open source software development principles to non-software digital products.

We have sought to leverage the proven value of the open source model to obtain deep, theory-based knowledge about nascent open content applications. Further research on open content could offer significant contributions to researchers and to practitioners. For research, it would synthesize key theoretical findings from the extensive research on open source software to provide a framework for research on the nascent open content phenomenon, which has only recently started receiving serious scholarly attention (Cheliotis 2009). It would carefully identify the theoretical characteristics of Wikipedia that cause it to succeed or fail, as a basis for understanding other kinds of open content; theory-focused researchers have already demonstrated an interest in theoretical research on Wikipedia (Kane and Fichman 2009). It would develop theoretical bases for different kinds of open content licenses, and explain their various market-related outcomes. For practice, such research would provide a critical lens for evaluating Wikipedia as an encyclopedia. It would distil key lessons from Wikipedia for the design of other kinds of open content. It would guide open content creators in their licensing decisions, based on their desired outcomes. It would provide a sound basis for deciding on open content and related licensing options as an approach to dealing with digital piracy.
This article provides a justification for considering open content as a scholarly focus of study. Open content is important as a new direction in the availability of information products. Just as open source software has become a dominant force in the software landscape, open content is increasingly becoming a dominant force in all the media it touches. Both subjectively evaluated content, such as open music, poetry, fiction, and video, and objectively evaluated content, such as maps, GPS navigation systems, and courseware, are gaining increasing importance. We encourage researchers to focus attention on this increasingly important domain of our information society, and to take advantage of the accessibility and relevance of the available data.

References
Ågerfalk, Pär J and Brian Fitzgerald. 2008. Outsourcing to an unknown workforce: Exploring opensourcing as a global sourcing strategy. MIS Quarterly 32, 2, 385–409.
Aksulu, Altay and Michael Wade. 2010. A Comprehensive Review and Synthesis of Open Source Research. Journal of the Association for Information Systems 11, 11.
Armstrong, Timothy K. 2010. Shrinking the Commons: Termination of Copyright Licenses and Transfers for the Benefit of the Public. Harvard Journal on Legislation 47, 2, 359–423.
Arora, Ashish et al. 2008. An Empirical Analysis of Software Vendors' Patch Release Behavior: Impact of Vulnerability Disclosure. Information Systems Research 21, 1, 115–132.
Arter, Dennis. 2005. Web Watch. Quality Progress 38, 9, 22.
Ballantyne, Peter. 2009. Accessing, Sharing and Communicating Agricultural Information for Development: emerging trends and issues. Information Development 25, 4, 260–271.
Bannerman, Sara. 2007. Intellectual Property Issues in ICT4D (Information and Communication Technologies for Development). Ottawa: International Development Research Centre.
Benlian, Alexander and Thomas Hess. 2010. Comparing the relative importance of evaluation criteria in proprietary and open-source enterprise application software selection – a conjoint study of ERP and Office systems. Information Systems Journal.

Bergquist, Magnus and Jan Ljungberg. 2001. The power of gifts: organizing social relationships in open source communities. Information Systems Journal 11, 4, 305–320.
Brian Fitzgerald. 2006. The Transformation of Open Source Software. MIS Quarterly 30, 3, 587–98.
Budoni, A. et al. 2007. Integration of webgis and open content environments for self-empowering e-governance. Urban and Regional Data Management UDMS 2007 Annual, 105.
Cheliotis, Giorgos. 2009. From open source to open content: Organization, licensing and decision processes in open cultural production. Decision Support Systems 47, 3, 229–244.
Chellappa, Ramnath K. and Shivendu Shivendu. 2005. Managing Piracy: Pricing and Sampling Strategies for Digital Experience Goods in Vertically Segmented Markets. Information Systems Research 16, 4, 400–417.
Chiao, Hak Fung. 2010. Essays in information economics. Doctoral Dissertation. United States – Michigan: University of Michigan.
Chou, Shih-Wei and Mong-Young He. 2010. The factors that affect the performance of open source software development – the perspective of social capital and expertise integration. Information Systems Journal.
Churchill, Elizabeth F and Mark Vanderbeeken. 2008. Open, Closed, or Ajar? Content Access and interactions. Interactions 15, 5, 42.
Couros, Alec Valintino. 2006. Examining the Open Movement: Possibilities and implications for education. Doctoral Dissertation. Canada: The University of Regina (Canada).
Cyr, Dianne. 2008. Modeling Web site design across cultures: Relationships to trust, satisfaction, and e-loyalty. Journal of Management Information Systems 24, 4, 47–72.
Dede, Chris. 2008. A Seismic Shift in Epistemology. EDUCAUSE Review 43, 3, 80.
Denning, Peter et al. 2005. Wikipedia risks. Communications of the ACM 48, 12, 152.
Doctor, Gayatri. 2007. Knowledge sharing: developing the digital repository of SIPS. VINE 37, 1, 64.
Doctor, Gayatri. 2008. Capturing intellectual capital with an institutional repository at a business school in India. Library Hi Tech 26, 1, 110.
Doctor, Gayatri and Smitha Ramachandran. 2008. DSpace@IBSA: knowledge sharing in a management institute. VINE 38, 1, 42.
Doerr, J. et al. 2010. Pricing of Content Services – An Empirical Investigation of Music as a Service. Sustainable e-Business Management, 13–24.
Dougan, Kirstin. 2010. Music to our Eyes: Google Books, Google Scholar, and the Open Content Alliance. portal: Libraries & the Academy 10, 1, 75–93.


Downes, S. 2007. Models for sustainable open educational resources. Interdisciplinary Journal of Knowledge and Learning Objects 3, 29–44.
Eijkman, Henk. 2008. Web 2.0 as a non-foundational network-centric learning space. Campus-Wide Information Systems 25, 2, 93.
Erik Möller. 2008. Definition of Free Cultural Works. Free Cultural Works Retrieved June 14, 2009.
Fallis, Don. 2008. Toward an epistemology of Wikipedia. Journal of the American Society for Information Science and Technology 59, 10, 1662.
Fang, Yulin and Derrick Neufeld. 2009. Understanding Sustained Participation in Open Source Software Projects. Journal of Management Information Systems 25, 4, 9–50.
Feller, Joseph et al. 2008. From Peer Production to Productization: A Study of Socially Enabled Business Exchanges in Open Source Service Networks. Information Systems Research 19, 4, 475–493.
Feller, Joseph and Brian Fitzgerald. 2000. A framework analysis of the open source software development paradigm. In 21st International Conference on Information Systems, ICIS '00, 58–69. Brisbane, Australia: Association for Information Systems.
Feller, Joseph and Brian Fitzgerald. 2001. Understanding Open Source Software Development. Addison-Wesley Professional.
Fiedler, Tom. 2008. The Web's Pathway to Accuracy. Nieman Reports 62, 2, 40.
Florence Devouard. 2005. How to get Wikimedia into the developing world. In Proceedings of Wikimania 2005, Frankfurt: Wikimedia Foundation.
Gallivan, Michael J. 2001. Striking a balance between trust and control in a virtual organization: a content analysis of open source software case studies. Information Systems Journal 11, 4, 277–304.
Garcelon, Marc. 2009. An information commons? Creative Commons and public access to cultural creations. New Media & Society 11, 8, 1307–1326.
Garud, Raghu, Sanjay Jain and Philipp Tuertscher. 2008. Incomplete by Design and Designing for Incompleteness. Organization Studies 29, 3, 351.
Geekcorps Mali. 2008. Wikimedia by moulin. Wikimedia by moulin Retrieved September 22, 2008.
Giancarlo Nuti Stefanuto and Sergio Salles-Filho. 2005. Impact of the free software and open source on the software industry in Brazil. Brazil: SOFTEX/UNICAMP/MCT.

Giles, Jim. 2005. Internet encyclopaedias go head to head. Nature 438, 7070, 900–901.
Gorman, G.E. 2007. A tale of information ethics and encyclopædias; or, is Wikipedia just another internet scam? Online Information Review 31, 3, 273.
Guimaraes, Tor, D. Sandy Staples and James McKeen. 2007. Assessing the impact from information systems quality. Quality Management Journal 14, 1, 30–44.
Hahn, Jungpil, Jae Yun Moon and Chen Zhang. 2008. Emergence of New Project Teams from Open Source Software Developer Networks: Impact of Prior Collaboration Ties. Information Systems Research 19, 3, 369–391.
Iyengar, R. 2010. Pricing for a Menu of Service Plans – An Application to the Digital Music Industry.
J Ljungberg. 2000. Open source movements as a model for organising. European Journal of Information Systems 9, 4, 208.
Jaisingh, Jeevan, Eric W. K. See-To and Kar Yan Tam. 2008. The Impact of Open Source Software on the Strategic Choices of Firms Developing Proprietary Software. Journal of Management Information Systems 25, 3, 241–275.
Järvinen, Pertti. 2008. Mapping Research Questions to Research Methods. In D. Avison et al. (eds.), Advances in Information Systems Research, Education and Practice, IFIP International Federation for Information Processing, 29–41.
Jiang, Zhengrui and Sumit Sarkar. 2009. Speed Matters: The Role of Free Software Offer in Software Diffusion. Journal of Management Information Systems 26, 3, 207–239.
Jin, Leigh, Daniel Robey and Marie-Claude Boudreau. 2007. Beyond Development. Information Resources Management Journal 20, 1, 68–80.
Jørgensen, Niels. 2001. Putting it all in the trunk: incremental software development in the FreeBSD open source project. Information Systems Journal 11, 4, 321–336.
Jøsang, Audun, Roslan Ismail and Colin Boyd. 2007. A survey of trust and reputation systems for online service provision. Decision Support Systems 43, 2, 618–644.
Kahn, Beverly K., Diane M. Strong and Richard Y. Wang. 2002. Information quality benchmarks: product and service performance. Communications of the ACM 45, 4, 184–192.
Kane, Gerald C. and Robert G. Fichman. 2009. The Shoemaker's Children: Using Wikis for Information Systems Teaching, Research, and Publication. MIS Quarterly 33, 1, 1–22.
Koch, Stefan and Georg Schneider. 2002. Effort, co-operation and co-ordination in an open source software project: GNOME. Information Systems Journal 12, 1, 27–42.
Korman, Ken. 2006. Exploring the digital universe. NetWorker 10, 1, 26.
Krogh, Georg von and Eric von Hippel. 2006. The Promise of Research on Open Source Software. Management Science 52, 7, 975–983.

Lerner, Josh and Jean Tirole. 2001. The open source movement: Key research questions. European Economic Review 45, 4–6, 819–826.
Lerner, Josh and Jean Tirole. 2005. The Economics of Technology Sharing: Open Source and Beyond. The Journal of Economic Perspectives 19, 2, 99–120.
Liao, Han-Teng. 2006. Towards creative da-tong. International Journal of Cultural Studies 9, 3, 395–406.
Lundell, Björn, Brian Lings and Edvin Lindqvist. 2010. Open source in Swedish companies: where are we? Information Systems Journal 20, 6, 519–535.
Magnus, P. D. 2006. Epistemology and the Wikipedia. In North American Computing and Philosophy Conference, Troy, New York.
Marsden, Gary. 2007. Open source bicycles. Interactions 14, 1, 16.
Matychak, Xanthe. 2008. Knowledge Architecture That Facilitates Trust and Collaboration. Interactions 15, 4, 26.
McShane, Clay. 2008. Review Essay: "Mind of God"—Not. Journal of Urban History 35, 1, 178–185.
Meyers, A., D. Mccubbrey and R. Watson. 2008. Open content textbooks: Educating the next generation of bioentrepreneurs in developing economies. Journal of Commercial Biotechnology 14, 4, 277.
Moe, W. W. 2009. How Much Does a Good Product Rating Help a Bad Product? Modeling the Role of Product Quality in the Relationship between Online Consumer Ratings and Sales. University of Maryland Working Paper.
Mukhtar, R. and Z. Rosberg. 2003. A client side measurement scheme for request routing in virtual open content delivery networks. In Proc. of 22nd IEEE International Performance, Computing, and Communications Conference, Citeseer.
Nelson, Matthew, Ravi Sen and Chandrasekar Subramaniam. 2006. Understanding Open Source Software: A Research Classification Framework. Communications of the Association for Information Systems 17, 1.
Niederman, Fred et al. 2006. A Research Agenda for Studying Open Source I: A Multi-Level Framework. Communications of the Association for Information Systems 18, 1.
Nwagwu, Williams E. and Allam Ahmed. 2009. Building open access in Africa. International Journal of Technology Management 45, 1/2, 82–101.
Okoli, Chitu. 2009. A Brief Review of Studies of Wikipedia in Peer-Reviewed Journals. In Proceedings of the Third International Conference on Digital Society, 2009, 155–160. Cancun, Mexico.
Okoli, Chitu et al. 2012. The people's encyclopedia under the gaze of the sages: A systematic review of scholarly research on Wikipedia. SSRN eLibrary Retrieved September 26, 2012.

Okoli, Chitu and Kevin Carillo. 2005. Intellectual Property Rights in Open Source Software Communities. In S. Dasgupta (ed.), Encyclopedia of Virtual Communities and Technologies, 285–290. Hershey, USA: Idea Group Reference.
Ola Abd El-Tawab. 2008. Wikipedia: Beyond the Free Encyclopedia. IslamOnline.net Retrieved September 22, 2008.
Payne, Christian. 2002. On the security of open source software. Information Systems Journal 12, 1, 61–78.
Peitz, Martin and Patrick Waelbroeck. 2006. Piracy of digital products: A critical review of the theoretical literature. Information Economics and Policy 18, 4, 449–476.
Perens, Bruce. 1999. The Open Source Definition. In Open Sources: Voices from the Open Source Revolution, 171–188. Retrieved June 14, 2009.
Pfaffenberger, Bryan. 2001. Why Open Content Matters. Knowledge, Technology & Policy 14, 1, 93.
Raghu, T. S. et al. 2009. Willingness to Pay in an Open Source Software Environment. Information Systems Research 20, 2, 218–236.
Raymond, Eric. 2001. The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary. Sebastopol, CA: O'Reilly.
Rossi, Maria Alessandra. 2004. Decoding the "Free/Open Source (F/OSS) Software Puzzle": a survey of theoretical and empirical contributions. Department of Economics, University of Siena. Retrieved May 1, 2012.
Scacchi, Walt. 2007. Free/Open Source Software Development: Recent Research Results and Methods. In Architectural Issues, 243–295. Elsevier. Retrieved May 1, 2012.
Schiltz, Michael, Frederik Truyen and Hans Coppens. 2007. Cutting the Trees of Knowledge: Social Software, Information Architecture and Their Epistemic Consequences. Thesis Eleven 89, 1, 94–114.
Schuwer, R. and F. Mulder. 2009. OpenER, a Dutch initiative in Open Educational Resources. Open Learning: The Journal of Open and Distance Learning 24, 1, 67–76.
Schweik, Charles, Tom Evans and J. Morgan Grove. 2005. Open Source and Open Content: a Framework for Global Collaboration in Social-Ecological Research. Ecology & Society 10, 1, 1–25.
Schweik, Charles M. et al. 2009. Reflections of an Online Geographic Information Systems Course Based on Open Source Software. Social Science Computer Review 27, 1, 118–129.


Sen, Ravi. 2007. A Strategic Analysis of Competition Between Open Source and Proprietary Software. Journal of Management Information Systems 24, 1, 233–257.
Sen, Ravi, Chandrasekar Subramaniam and Matthew L. Nelson. 2008. Determinants of the Choice of Open Source Software License. Journal of Management Information Systems 25, 3, 207–239.
Sharma, Srinarayan, Vijayan Sugumaran and Balaji Rajagopalan. 2002. A framework for creating hybrid-open source software communities. Information Systems Journal 12, 1, 7–25.
Sinha, Rajiv K, Fernando S Machado and Collin Sellman. 2010. Don't Think Twice, It's All Right: Music Piracy and Pricing in a DRM-Free Environment. Journal of Marketing 74, 2, 40–54.
Stallman, Richard M. 2002. Copyright and Globalization in the Age of Computer Networks. In Free Software, Free Society: selected essays of Richard M. Stallman, 133–154. Boston: Free Software Foundation. Retrieved September 27, 2010.
Stallman, Richard M. 2007. Why "Open Source" misses the point of Free Software. Free Software Foundation Retrieved June 15, 2009.
Stallman, Richard M. 2009. Free Software, Free Society: Selected Essays of Richard M. Stallman. J. Gay (ed.). CreateSpace.
Stamelos, Ioannis et al. 2002. Code quality analysis in open source software development. Information Systems Journal 12, 1, 43–60.
Stewart, Katherine and Sanjay Gosain. 2006. The Impact of Ideology on Effectiveness in Open Source Software Development Teams. MIS Quarterly 30, 2, 4.
Stewart, Katherine J., Anthony P. Ammeter and Likoebe M. Maruping. 2006. Impacts of License Choice and Organizational Sponsorship on User Interest and Development Activity in Open Source Software Projects. Information Systems Research 17, 2, 126–144.
Sundararajan, Arun. 2004. Managing Digital Piracy: Pricing and Protection. Information Systems Research 15, 3, 287–308.
Svoboda, Elizabeth. 2006. One-Click Content, No Guarantees. IEEE Spectrum 43, 5, 64.
Swartz, Nikki. 2007. Library of Congress to Digitize Brittle Books. Information Management Journal 41, 3, 12.
Telang, Rahul and Michael Smith. 2009. Competing with Free: The Impact of Movie Broadcasts on DVD Sales and Internet Piracy. Management Information Systems Quarterly 33, 2, 321–338.
UNCTAD. 2007. UNCTAD Information Economy Report 2007-2008: Science and technology for development: the new paradigm of ICT. Geneva: UNCTAD.
Ven, Kris et al. 2008. Stimulating information sharing, collaboration and learning in operations research with libOR. International Journal on Digital Libraries 8, 2, 79–90.

Vitari, C. and A. Ravarini. 2009. A longitudinal analysis of trajectory changes in the software industry: the case of the content management application segment. European Journal of Information Systems 18, 3, 249.
Waters, Neil L. 2007. Why You Can't Cite Wikipedia in My Class. Communications of the ACM 50, 9, 15.
Weerawarana, Sanjiva and J. Weeratunga. 2004. Open Source in Developing Countries. Stockholm: Sida.
Wikia contributors. 2008. Truth in Numbers: The Wikipedia Story. Wikia Retrieved September 22, 2008.
Wikimedia contributors. 2009. Founding principles. Meta, a Wikimedia project coordination wiki Retrieved June 15, 2009.
Wikipedia contributors. 2008a. Seigenthaler incident. Wikipedia, The Free Encyclopedia Retrieved August 25, 2008.
Wikipedia contributors. 2008b. Epistemology. Wikipedia, The Free Encyclopedia Retrieved August 25, 2008.
Wikipedia contributors. 2008c. Modding. Wikipedia, The Free Encyclopedia Retrieved September 23, 2008.
Wikipedia contributors. 2009a. GNU Free Documentation License. Wikipedia, The Free Encyclopedia Retrieved June 14, 2009.
Wikipedia contributors. 2009b. The Free Software Definition. Wikipedia, The Free Encyclopedia Retrieved June 14, 2009.
Wiley, David and Seth Gurrell. 2009. A decade of development... Open Learning 24, 1, 11–21.
Zhu, X. 2007. Extending the SCORM Specification for References to the Open Content Object. Educational Technology & Society 10, 1, 17.


Appendix: Genres of works covered by the open content definition, as specified in international intellectual property laws
Berne Convention: Convention for the Protection of Literary and Artistic Works, 1971
Article 2, paragraph 1
The expression "literary and artistic works" shall include every production in the literary, scientific and artistic domain, whatever may be the mode or form of its expression, such as books, pamphlets and other writings; lectures, addresses, sermons and other works of the same nature; dramatic or dramatico-musical works; choreographic works and entertainments in dumb show; musical compositions with or without words; cinematographic works to which are assimilated works expressed by a process analogous to cinematography; works of drawing, painting, architecture, sculpture, engraving and lithography; photographic works to which are assimilated works expressed by a process analogous to photography; works of applied art; illustrations, maps, plans, sketches and three-dimensional works relative to geography, topography, architecture or science.
TRIPS: Agreement on Trade-Related Aspects of Intellectual Property Rights, 1994
Article 10. Computer Programs and Compilations of Data
1. Computer programs, whether in source or object code, shall be protected as literary works under the Berne Convention (1971).
2. Compilations of data or other material, whether in machine readable or other form, which by reason of the selection or arrangement of their contents constitute intellectual creations shall be protected as such. Such protection, which shall not extend to the data or material itself, shall be without prejudice to any copyright subsisting in the data or material itself.
Section 4: Industrial Designs
Article 25. Requirements for Protection
1. Members shall provide for the protection of independently created industrial designs that are new or original. Members may provide that designs are not new or original if they do not significantly differ from known designs or combinations of known design features. Members may provide that such protection shall not extend to designs dictated essentially by technical or functional considerations.
2. Each Member shall ensure that requirements for securing protection for textile designs, in particular in regard to any cost, examination or publication, do not unreasonably impair the opportunity to seek and obtain such protection. Members shall be free to meet this obligation through industrial design law or through copyright law.
Section 6: Layout-Designs (Topographies) of Integrated Circuits
Article 35. Relation to the IPIC Treaty
Members agree to provide protection to the layout-designs (topographies) of integrated circuits (referred to in this Agreement as "layout-designs") in accordance with Articles 2 through 7 (other than paragraph 3 of Article 6), Article 12 and paragraph 3 of Article 16 of the Treaty on Intellectual Property in Respect of Integrated Circuits and, in addition, to comply with the following provisions.

