Expanding Learning and Social Interaction through Intelligent Systems Design: Implementing a Reputation and Recommender System for the Claremont Conversation Online

By

Brian Thoms A Dissertation submitted to the Faculty of Claremont Graduate University in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Graduate Faculty of Information Systems and Technology

Claremont, California 2009

Approved by: ____________________________ Terry Ryan, PhD

Copyright by Brian Thoms 2009. All rights reserved.

We, the undersigned, certify that we have read this dissertation of Brian Thoms and approve it as adequate in scope and quality for the degree of Doctor of Philosophy.

Dissertation Committee:

___________________________ Terry Ryan, Chair

___________________________ Lorne Olfman, Member

___________________________ Samir Chatterjee, Member

Abstract of the Dissertation

Expanding Learning and Social Interaction through Intelligent Systems Design: Implementing a Reputation and Recommender System for the Claremont Conversation Online

By Brian Thoms
Claremont Graduate University: 2009

In this dissertation I examine the design, construction and implementation of an online blog ratings and user recommender system for the Claremont Conversation Online (CCO). In line with constructivist learning models and practical information systems (IS) design, I implemented a blog ratings system (a system that can be extended to allow for rating wiki pages, profiles and files) to provide CCO users with the capability to rate the blog posts of their peers. I also implemented a user recommender system (a system that can likewise be extended to blogs, wiki pages, profiles, and files) to foster connection-making across the CCO. The recommender system utilizes ratings from the ratings system to recommend like-minded individuals from across the site. While this research builds upon earlier research on the CCO, I expand it to explore how 1) a reputation system for blogs can facilitate learning in higher education and 2) a recommender system can increase user retention across the CCO. I measured these new components across five courses implementing the CCO. Results showed that while adoption rates for each system varied across courses, users adopting both the recommender and ratings systems experienced higher perceived levels of learning, social interaction and community compared with those students choosing not to adopt either system. This population also reported higher levels of planned continued CCO usage compared with those not adopting either system.

Dedication

This dissertation is dedicated to my father, James Thoms, who passed away on October 24, 2008.

Acknowledgements

I moved to California from New York in 2005 for sun, surf and school. Soon after, Claremont quickly became my west coast home away from the beach. As I reach the end of my student life at CGU, I would like to thank all those who have supported me along this amazing journey, including the School of Information Systems and Technology student body, faculty and administration. Specifically, I would like to thank Nathan Garrett for his support as a lab member and fellow collaborator on the Claremont Conversation Project. I would also like to thank Nathan Botts, with whom I have collaborated on numerous side projects, many of which have helped pay my bills. I would also like to thank all past and present members of the SISAT Student Council (Mark Brite, Rita Clemons, Daniel Firpo, Nicole Lytle, Trudi Miller, and Srimanth Sanbaraju) for making SISAT such a vibrant community in which to learn and socialize. Most importantly, I would like to thank the members of my dissertation committee: Lorne Olfman for his wisdom and detailed feedback using his tablet computer, Samir Chatterjee for his guidance and expertise in design science research, and Terry Ryan for his invaluable mentorship and unending patience as my doctoral advisor. Last, but not least, I wish to thank my friends and family for all the support they have shown me. Special thanks go to those called for airport duty on too many occasions. Extra special thanks go to my mom! Once again, thanks to everyone!


Table of Contents

CHAPTER ONE – INTRODUCTION
  Problem Statement
    Retention
    Learning
    Social Interaction and Community
  Overview of Chapters
CHAPTER TWO – LITERATURE
  Online Social Networks
  Communities of Practice at CGU
  Theoretical Models for CoPs
    Activity Theory (focus on a learner's activities)
    Constructivism (focus on a learner's needs)
    Social Presence (focus on a learner's environment)
  Online Spaces at CGU
    Supporting CoP Technologies
    Adoption of User-centric OLCs
CHAPTER THREE – RESEARCH METHODOLOGY
  Design Science Research
  Intelligent Systems Design
    Existing CCO Landscape
    Enhancing CCO Blogging Capabilities: Allowing for rating blogs
    Enhancing OLC Connections: Recommender System
CHAPTER FOUR – ARTIFACT CONSTRUCTION
  Reputation System Design
    Reputation System Design Choices
  Recommender System Design
    Collaborative-based Design Algorithm
    Recommender System Design Choices
CHAPTER FIVE – RESEARCH DESIGN
  Quasi-Experimental Research
  Research Hypotheses
    Retention
    Learning and Motivation
    Social Interaction and Community
  Pretest and Posttest Analysis
    Retention
    Learning and Motivation
    Social Interaction and Community
    Community
  Content and Activity Analysis
  Interview Data
CHAPTER SIX – IMPLEMENTATION
  Software Pilot
  Population: Graduate Courses at CGU
  Intervention: Instructor and Student Training
  Intervention: Site Layout
CHAPTER SEVEN – RESULTS
  Data Collection Timeline
  Pretest Data
    CCO Ratings System (pretest)
    CCO Recommender System (pretest)
  Site Statistics
    CCO User Activity
    CCO User Created Content
    CCO Ratings Specific Data
    CCO Recommender Specific Data
  Posttest Data
    CCO Trends: 2008 versus 2006/2007
    CCO Blogging
    CCO Ratings
    CCO Recommendations
    CCO Usability
  Qualitative Posttest Results
  Interview Data
    CCO Ratings System
    CCO Recommender System
CHAPTER EIGHT – HYPOTHESIS TESTING
  Retention
  Learning and Motivation
  Social Interaction and Community
  Community
CHAPTER NINE – DISCUSSION AND NEXT STEPS
  Ratings System
    Ratings System: Social Interaction & Community
    Ratings System: Learning and Motivation
    Ratings System: Placement
    Ratings as Input for Recommendations
  Ratings System Next Steps
    Next Steps: Ratings System Placement
    Next Steps: Enhanced Ratings System (with feedback capabilities)
    Next Steps: Expanded Ratings System (rate other content)
    Next Steps: Enhancing the Ratings System Algorithm
    Next Steps: Integration with Courses
    Next Steps: Measure Ratings System on Undergraduate Population
  CCO Recommender System
    Recommender System: Retention
    Recommender System: Social Interaction and Community
    Recommender System: Cold Start
  Recommender System Next Steps
    Next Steps: Placement
    Next Steps: Expansion and Enhancements
CHAPTER TEN – LIMITATIONS AND FUTURE STUDY
  Limitations
  Future Study (ongoing)
CHAPTER ELEVEN – CONCLUSION
REFERENCES
INDEX OF CHARTS, FIGURES AND INSTRUMENTS
  Dissertation Timetable
  Pretest
    Pretest: Instrument Development
    Pretest: Demographic Data
    Pretest: Online Community
    Pretest: Blogging
    Pretest: Reputation System
    Pretest: Recommender System
  Posttest
    Posttest: Instrument Development
    Posttest: Demographic Data
    Posttest: Online Community
    Posttest: Blogging
    Posttest: Reputation System
    Posttest: Recommender System
    Posttest: Overall Satisfaction
    Posttest: Qualitative Questions
    Posttest: Qualitative Interview Email
    Posttest: Qualitative Interview Questions
  Interview #1: Ratings System Questions
  Interview #1: Recommender System Questions
  Interview #2: Ratings System Questions
  Interview #2: Recommender System Questions

Acronyms

ABD – All but dissertation
AJAX – Asynchronous JavaScript and XML
AR – Action Research
CCO – Claremont Conversation Online
CGU – Claremont Graduate University
CMS – Course Management System
CoI – Community of Interest
CoP – Community of Practice
CSCW – Computer Supported Collaborative Work
CSS – Cascading Style Sheets
DB – Database
df – Degrees of Freedom
DSR – Design Science Research
HCI – Human Computer Interaction
HTML – Hypertext Markup Language
IRB – Institutional Review Board
IS – Information Systems
JS – JavaScript
OLC – Online Learning Community
OSN – Online Social Network
P2P – Peer-to-Peer
PHP – Personal Home Page
PMCC – Pearson's Product Moment Correlation Coefficient
SDSU – San Diego State University
SISAT – School of Information Systems and Technology
[SL]² – Social Learning Software Lab
SQL – Structured Query Language
TAM – Technology Acceptance Model
T-course – Transdisciplinary Course
VLE – Virtual Learning Environment
XML – eXtensible Markup Language

CHAPTER ONE – INTRODUCTION

Problem Statement

Recent research examined how a specific online learning community (OLC), the Claremont Conversation Online (CCO), facilitates knowledge sharing and fosters learning at Claremont Graduate University (CGU). While the CCO has achieved moderate levels of success, one of the difficulties, as identified in Thoms et al. (2007) and Garrett et al. (2007), has been continued usage. While the software continues a steady influx of new users each semester, there is a precipitous drop in active users at the end of each semester. Figure 1 illustrates this sharp drop-off in active users (72%) from the start of the 2008 spring term through June.

Figure 1 – Active User Count (all users)

[Figure 1 shows monthly active-user counts from January 2008 through June 2008: approximately 377, 353, 324, 291, 181, and 107 users, respectively.]

One possible explanation for such a mercurial population may be the perception of the CCO as an institutional resource, similar to our university's course management system (CMS), Sakai. The CCO was designed to be more than a CMS and it aimed to act

as an alternative to Sakai. Yet it remains relatively untapped as a mechanism for expanded scholarly research, active learning and social interaction for CGU. This philosophy is grounded in Reed's Law, which states that the utility of networks, particularly social networks, can scale exponentially with the size of the network (Reed, 1999). Consequently, the more users adopt the CCO, the more powerful the CCO becomes as a possible resource for research and learning.

Retention

To assess the effectiveness of the CCO, aspects of community building and social interaction were measured (Garrett et al. 2007; Thoms et al. 2007; Thoms et al. 2008). On items related to continued usage, results indicated that while users' needs were satisfied, users perceived little, if any, value in continuing to use the software. The majority of respondents reported neutral (35%) or negative (33%) perceptions on items related to continued usage. Another interesting trend, represented by Figure 1 and supported further by Figure 2, is the loss in active users from month to month. From January 2008 to February 2008, the CCO saw a 7% reduction in total active users, but only a 0.1% loss among active users with more than one connection. Additionally, from February 2008 to March 2008 active users declined by 9%, while active users with connections remained static. This trend continued from March 2008 to April 2008, when total active users fell by 11% while active users with more than one connection fell by only 6.8%. April 2008 to May 2008 witnessed the largest drop-off, with a reduction of 61% of active users. However, active users with more than one connection declined by only 7%. Altogether,

the CCO experienced a 72% reduction in total active users from January 2008 to June 2008, while this figure was only 27% for active users with more than one connection. These two findings emphasize the need for discovering innovative mechanisms to help retain users and move towards a more sustainable CCO.

Learning

Another measure of success for the CCO was how well the software aligned with course objectives and fostered areas of learning. Most respondents agreed that the CCO was aligned with course-related learning goals (60%) and that the course community fostered learning (82%). Further, the research determined how well the CCO stimulated and motivated members. The majority of respondents agreed that making their work accessible increased their motivation to perform quality work (74%). Additionally, over half of the respondents felt that seeing their peers' work helped with their own work (54%). However, many respondents were neutral (28%) or disagreed (21%) that the CCO helped them reflect on their class progress. Many respondents also were neutral (26%) or disagreed (20%) that their classmates' work helped improve their own writing. These two outcomes offer further opportunities to develop a more stimulating CCO that can better engage students to reflect on their own work as well as the work of their peers.

Social Interaction and Community

Finally, the above-noted CCO research discovered that a high level of social presence existed in courses using the CCO. Based on measurements adapted from Gunawardena and Zittle (1997) and Richardson and Swan (2003), respondents agreed that the CCO was an excellent medium for social interaction (85%). Most respondents

were also comfortable conversing through the OLC (86%). Overall, respondents felt comfortable participating in course discussions (91%) and perceived a strong sense of community in their courses (74%). However, while these results were strong, many individuals are not taking advantage of the social networking capabilities of the software. Out of 757 CCO users, only 115 (15%) have formed more than one connection since the CCO's debut in September 2006. Among these 115 users, the average number of connections is 4.9. Figure 2 illustrates users with a network of more than one peer and their retention over the spring semester. Of the 115 users with more than one connection, 80 (70%) were active in January 2008 and 63 (55%) were still active in June 2008. These numbers provide further incentive to develop technology to foster greater social connectivity across the CCO. Figure 2 identifies the number of users across the CCO with more than one peer connection. In January, active users with more than one connection represented 21% of total active users. In February this number rose to 22% as the number of active users declined. In March this number rose to 24%, and in April to 25%. By June, active users with more than one peer connection represented 59% of total CCO users.


Figure 2 – Active User Count (friends > 1)

[Figure 2 shows monthly counts of active users with more than one connection from January 2008 through June 2008: 80, 79, 79, 74, 69, and 63 users, respectively.]
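The retention comparison above is simple arithmetic over the counts in Figures 1 and 2. The short sketch below is illustrative only (Python, not part of the CCO software); the counts are read from the two charts, and the February total is approximate. It shows how month-over-month and January-to-June reductions can be computed from those counts.

```python
# Monthly active-user counts as read from Figure 1 (all users) and Figure 2
# (users with more than one connection), January through June 2008.
# The February total is approximate.
months = ["Jan-08", "Feb-08", "Mar-08", "Apr-08", "May-08", "Jun-08"]
all_users = [377, 353, 324, 291, 181, 107]
connected_users = [80, 79, 79, 74, 69, 63]

def pct_change(before, after):
    """Percentage change from one monthly count to the next."""
    return 100.0 * (after - before) / before

for series_name, counts in [("all users", all_users),
                            ("users with >1 connection", connected_users)]:
    overall = pct_change(counts[0], counts[-1])
    print(f"{series_name}: Jan-to-Jun change = {overall:.1f}%")
    for i in range(len(counts) - 1):
        step = pct_change(counts[i], counts[i + 1])
        print(f"  {months[i]} -> {months[i + 1]}: {step:.1f}%")
```

Run as-is, this reproduces the roughly 72% January-to-June decline for all users and the much smaller decline among users with more than one connection, which is the contrast the retention argument rests on.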

In this dissertation I report on a design research project that built and implemented two new CCO components and evaluated their effects on classes utilizing the CCO. The components that were designed and evaluated were: 1) a reputation system providing individuals with the ability to rate blog posts across the CCO, and 2) a collaborative-based user recommender system based on blog ratings from the reputation system.
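The construction of both artifacts is detailed in Chapter Four. As a preview of the collaborative approach, the sketch below illustrates one common way a ratings-based user recommender can score "like-minded" peers: comparing two users' blog-rating vectors with Pearson's product moment correlation coefficient (PMCC). This is an illustrative Python sketch under that assumption, not the CCO's actual PHP implementation, and the function and variable names are hypothetical.

```python
from math import sqrt

# Hypothetical ratings data: user -> {blog_post_id: star rating}.
ratings = {
    "alice": {"post1": 5, "post2": 3, "post3": 4},
    "bob":   {"post1": 4, "post2": 2, "post4": 5},
    "carol": {"post2": 5, "post3": 1, "post4": 2},
}

def pearson_similarity(a, b):
    """PMCC between two users, computed over the posts both have rated."""
    shared = set(ratings[a]) & set(ratings[b])
    n = len(shared)
    if n == 0:
        return 0.0
    xs = [ratings[a][p] for p in shared]
    ys = [ratings[b][p] for p in shared]
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    if var_x == 0 or var_y == 0:
        return 0.0
    return cov / sqrt(var_x * var_y)

def recommend_peers(user, top_n=5):
    """Rank other users by rating similarity; the most similar are suggested as connections."""
    others = [u for u in ratings if u != user]
    scored = sorted(others, key=lambda u: pearson_similarity(user, u), reverse=True)
    return scored[:top_n]

print(recommend_peers("alice"))  # bob rates the shared posts most similarly to alice
```

Chapter Four describes the actual collaborative-based design algorithm and the design choices behind it.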

Overview of Chapters

This dissertation contains eleven chapters. Chapter One introduces the research project. Chapter Two provides a summary of the current literature on social networks and communities of practice and presents a theoretical model used to guide the research design. Chapter Three discusses the Design Science Research (DSR) methodology applied in this dissertation. Chapter Four details the construction of the IT artifacts. Chapter Five discusses the quasi-experimental design used to measure aspects of both IT

artifacts. Chapter Six discusses the implementation process across courses at CGU. Chapter Seven provides a breakdown of survey data and interview data. Chapter Eight details the hypotheses testing. Chapter Nine is the discussion and next steps. Chapter Ten details limitations and future work. Chapter Eleven is the conclusion.


CHAPTER TWO – LITERATURE

Online Social Networks

Milgram's (1967) small-world phenomenon asserts that everyone in the world can be reached through a small chain of social ties. From this concept the more familiar phrase "six degrees of separation" is derived. Today, we bear witness to the next wave of online social networks (OSNs) that attempt to harness the power of social connections to foster even greater interaction. While underlying business models look to the power of advertising money as grounds for building sustainable OSNs, the philosophical underpinnings of OSNs focus on the power of "We" for collaborating on general interests and/or building solutions to complex problems. These networks make use of the latest in Internet technologies to provide users with an interactive multimedia environment. Consequently, many individuals are hooked on social networks, with some reports estimating that 37% of all online U.S. adults and 70% of all U.S. teens engage in some form of online social networking every month (Social Network Marketing, 2007). The number of social networks across the Internet is also growing. Mashable.com (2009) identifies over 350 popular online social networking websites that maintain active users and are open to new users. While there are more dominant players such as MySpace and Facebook, niche social networks are creating content-specific environments for individuals to collaborate and build community. Research in OSNs has attempted to understand how and why some OSNs grow while others do not. Some research areas concentrate on the structure of the social networks themselves. Through sophisticated social network analyses, simulations attempt

to pinpoint required levels of usage for the self-sustainability of the OSN, also known as a critical mass of users (Otte and Rousseau, 2002; Garton et al., 1997). Other research, including this dissertation, focuses on more design-oriented components of an OSN such as underlying technologies and/or interface design.

Communities of Practice at CGU

Etched on the perimeter wall of our university campus is the phrase, "The center of a college is in great conversation and out of the talk of college life springs everything else." This notion of conversation extends beyond face-to-face interaction to include all aspects of life within higher education, including course discussions, campus speakers, symposia and academic conferences. Participants at the higher levels of higher education (i.e., graduate students) aspire to become participating members of their academic communities where they can discover and share knowledge with their academic peers. These academic communities are a subset of what Lave and Wenger (1991) have coined communities of practice (CoP). In such communities, individuals work together towards common goals, collaborating on common problems, sharing best practices, supporting one another and sharing a common identity. Successful CoPs sustain engagement and collaboration among individuals whereby knowledge sharing becomes an intrinsic function of the CoP (Adams and Freeman, 2000). CoPs are increasingly being used in the diffusion of knowledge by streamlining workflow and sustaining intellectual capital within and across organizational boundaries (Mason and Lefrere, 2003). In all types of knowledge sharing activities, a community

member engages in conversations, experimentations, and experiences with other members sharing similar objectives (Pan and Leidner, 2003). In a CoP, knowledge sharing activities involve individuals using the CoP as a mechanism for effectively conveying what they know (Hendriks, 1999; Usoro and Sharratt, 2003). In an online CoP, participating members share many benefits. In an extensive review paper on online communities, Iriberri and Leroy (2008) identify such shared benefits, including common information exchange, social support, social interaction, schedule flexibility and data permanency. For the construction of our university's online CoP, the developers, including this author, considered eight principles identified by Daniel et al. (2003) in classifying membership in a CoP. Thus, membership in the CCO online community included:

• shared sets of interests,
• individual autonomy in setting the goals of the community,
• a common identity,
• awareness of the social protocols and goals of the community,
• the ability to share information and knowledge effectively,
• awareness that each is a member of the community,
• voluntary participation, and
• effective means of communication.

Theoretical Models for CoPs

Figure 3 shows a theoretical model that incorporates the various elements outlined above. In today's classroom, activity-based learning, represented in the model by activity theory, is common where students and faculty combine the use of technology to accomplish course tasks (Hiltz and Turoff, 2005; Karampiperis and Sampson, 2005). These technologies, in turn, (1) accommodate the unique learning styles of each individual, represented in the model by constructivism, and (2) facilitate levels of social interaction and community building, represented in our model by social presence theory (Tu and McIsaac, 2002).

Figure 3 – Theoretical Model for Online CoPs in Education

Activity Theory (focus on a learner's activities)

While we should not consider technology as the driving force in all types of learning, technology plays a central role in an online CoP. Therefore, in designing the CCO (our university's online CoP) we first had to consider how individuals and groups

would interact with these technologies. From its origins, activity theory considers human actions to be directed at objects and mediated by artifacts (Vygotsky, 1987). More simply put, an activity is the way a subject (either an individual or a group) moves towards an object with the purpose of attaining certain results or certain objectives (Neto et al., 2005). Activity theory also considers aspects of motivation and engagement. In activity theory, activities are goal-directed, where multiple ways exist to achieve those goals, oftentimes through adaptive means (Bødker, 1989). In educational environments, when instructors can choose activities from both online and face-to-face mediums, they can also select the activity that provides the best fit for any particular learning objective (Mor et al., 2005; Heckman and Annabi, 2006). Engagement theory also considers activity-based learning and is often associated with activity theory. Engagement theory asserts that students must be meaningfully engaged in learning activities through interaction with others and worthwhile tasks, facilitated and enabled by technology (Kearsley and Shneiderman, 1999). Activity theory provides a useful guideline for evaluating human-computer interaction in a field setting, such as an online environment (Kuutti, 1995). It can be used as a lens for understanding sociotechnical networks as a function of technology, community and the interaction between the two. For my dissertation, activity theory can be used to predict how individuals will manipulate specific technologies to accomplish certain course-based tasks and goals. In existing online CoPs, activities include working with discussion boards, messaging, blogs and collaborative writing tools in order to share knowledge and build course community.

When studying motivations behind blogging, Nardi et al. (2004) applied activity theory to understand how blogs were used to communicate specific social purposes to others. In a study on higher education, Issroff and Scanlon (2001) found that activity theory dictates that multiple factors exist that can impact the usage of any one specific technology. In this research, I use activity theory to guide how individuals manipulate different Web 2.0 technologies to accomplish specific goal-oriented tasks that meet specific learning needs (as per constructivism) and that also aid in building a sustainable online community of users (as per social presence theory). Activity theory provides a lens for understanding these interactions since it focuses on the activities in which individuals take part, whether in the construction of an individual's portfolio or the development of a group wiki.

Constructivism (focus on a learner's needs)

In an educational setting, activities must account for the individual learner's needs, and in many cases a learner will manipulate different technologies differently. In certain situations, a technology must meet the learning needs of individuals and be flexible in adapting to those needs. Prior research has traced the roots of a CoP to constructivism (Johnson, 2001; Palloff and Pratt, 1999; Savery and Duffy, 1996). Constructivism has largely been attributed to the work of Piaget (1952), who first theorized that learning can be based on the interaction and experiences of the learner within a specific context. Consequently, individuals develop knowledge and understanding through forming and continually refining concepts. There has been much research extending Piaget's work. Hagstrom and

Wertsch (2004) state that constructivism encourages, utilizes, and rewards the unique and multidimensional characteristics of the individual throughout the learning process. Additionally, Squires (1999) states that constructivism focuses on learner control, with learners making decisions that match their own cognitive state and their own needs. While constructivism began as a theory of learning, it has progressively been used as a theory of education, of the origin of ideas, and of both personal knowledge and scientific knowledge (Matthews, 2002). Dalsgaard (2006) argues that social software can be used to support a constructivist approach to online learning. Social software can refer to any loosely connected application in which individuals can communicate with one another and track discussions across the Internet (Tepper, 2003). The development of any online learning tool should consider the learners' point of view across these discussions (Soloway et al., 1996), providing them with a certain and needed level of control (Squires, 1999). Most social software can support these notions, providing users with self-governing and individually motivated activities. Thus, as we constructed the CCO we also considered how each component of the software could support this constructivist approach to learning.

Social Presence (focus on a learner's environment)

A number of theories look at the role people play in an online CoP, including connectivism, social constructivism, behaviorism, social learning, situated learning and social presence. These theories focus on how individuals learn in groups, and interact and collaborate with other members of the environment. Social presence theory asserts that individuals are influenced to a great extent by the surrounding members of a community.

Social presence theory also considers the degree to which an individual's perception of an online community, in its entirety, affects his or her participation in that community (Short, Williams and Christie, 1976). In other words, social presence refers to a communicator's sense of awareness of the presence of an interaction partner. Within human-computer interaction (HCI), social presence theory considers how a 'sense of community' is shaped and affected by technological interactions (Biocca et al., 2003). Tu and McIsaac (2002) redefine social presence theory for computer-mediated communication, stating that it is the degree of feeling, perception, and reaction to another intellectual entity within a computer-mediated environment. Levels of social presence can be a critical factor affecting the quality of social interaction within a group, and can also influence the dynamic of the group (Hung, 2003). Existing research indicates that high levels of social presence play a significant role in improving instructional effectiveness and building a sense of online community (Gunawardena and Zittle, 1997). Richardson and Swan (2003) and Shih and Swan (2005) discovered that students' perception of social presence in online courses was significantly related to overall satisfaction with the course, perceived learning, and instructor satisfaction. When measuring social presence in an online professional development class, Wise et al. (2004) concluded that high social presence creates an approachable environment and hence a more satisfying learning experience and greater learning. Delicious is a social bookmarking web service for storing, sharing, and discovering web bookmarks. In measuring annotations made by users of Delicious, Lee (2006) discovered that individuals were more likely to include annotations (or more


helpful information) with their bookmarks if they interacted with other individuals more frequently. When individuals perceive others within an online CoP to be real, they can begin building trust in the community and also start to view the online community as a valid source of knowledge building and/or social interaction. Thus, when implementing the CCO, it was critical to consider the composition of the community, including understanding that the community itself becomes a unique entity. Furthermore, an online community cannot thrive without a palpable sense of social presence. This research looks to transfer the existing social presence obtained within a classroom and campus setting into an online experience.

Online Spaces at CGU

A theoretical model, as detailed in the previous section, is an important guide for understanding how users learn, collaborate and interact in online communities. The model can also be used to guide how users participate in different types of online spaces. Represented in Figure 4, recent work by the Social Learning Software Lab ([SL]²), as detailed in Garrett (2009), identified three aspects of online spaces for higher education, broken down as follows:

1) A personal space, including a sense of ownership and control over online content.

2) A class space, which is characterized by collaboration and relationships among the members of classes.


3) A community space, where individuals can connect to people outside a particular course.

Figure 4 – Learning Spaces (Garrett, 2009)

Each of these spaces has its own specific needs. For example, in the CGU personal space, students may be interested in creating an online portfolio of work to track their expedition through a Masters or PhD program, or in creating an online gallery of images from a recent conference or trip. Alternatively, for the class space, individuals may congregate on a centralized topic, collaborate on a final group project and/or reflect on specific course material, such as weekly reading assignments or guest lectures. Finally, the community space is where individuals begin participating in the larger community. In the community space, individuals break away from the confines of their personal and class spaces to discover and participate in larger community spaces, or what Fischer (2001) identifies as communities of interest. These communities can be ad hoc and short term for projects with specific timelines and end-goals, or they can be permanent entities. A student


council would be a good example of a permanent entity, where no specific end-dates or end-goals exist. While the underlying technologies that support each of these spaces can be the same, the motivations behind each, in combination with the different actions individuals can take, can often vary greatly from space to space, as explained by activity theory.

Supporting CoP Technologies

With the clear vision of providing our university with an online space for building community and fostering learning, in 2006 our research lab was faced with the task of identifying what type of online platform would be used to build CGU's online CoP. One specific type of online platform, used across 96% of learning institutions, is course management system (CMS) software (Educational Marketer, 2003). CMS software is designed for the facilitation and management of academic course work. Our university's preferred CMS platform, Sakai, provided us with the opportunity to work with an already implemented and supported software platform with which to develop our online community. However, although Sakai could provide a powerful tool for instructors, we felt it would fall short in providing students with a learner-centric environment. As a largely top-down system, course facilitators control the flow and ownership of information in Sakai. Additionally, course communities in Sakai close at the end of a term, leaving no persistent artifact students can take with them or visit. Sakai also limits the amount of peer-to-peer conversation and profile development, elements that can be essential in sharing knowledge and building community. As a result, CMS software, like


Sakai, can be thought of as an institutional tool that keeps students from controlling the visibility, organization and/or presentation of their online content. Residing on the opposite end of the 'CoP control spectrum' are online learning communities (OLCs). Where CMS software is top-down, OLCs can be viewed as bottom-up, where individuals own the content they create. Furthermore, it is the individual or community that decides what work becomes visible to whom. In providing the user with this control, OLCs can be characterized more closely as traditional CoPs, where the success of the OLC is directly correlated with the participation of its users. In an academic setting, although individuals may be graded on their contributions, the overall community can also be assessed for its ability to create a sustainable knowledge community. Table 1 illustrates this bifurcation in detail.

Table 1 – CMS versus OLC (bifurcation in online CoPs)

Course Management System:
• Top down (instructor-driven)
• Teacher owns courses and structure
• Students react to what is required
• Content echoes instructor's voice
• Binds knowledge to course objectives
• Binds participation to course members

Online Learning Community (the preferred solution):
• Bottom up (student-driven)
• Students take ownership of much of the content
• Students create and respond to others
• Students create their own unique voice
• Fosters knowledge through search and gather techniques
• Allows and encourages external members to join in

Adoption of User-centric OLCs

Studies in online collaboration have shown that virtual communication patterns correspond in some fashion to real-life communication (Redfern and Naughton, 2002; Rhode et al., 2004). Consequently, online communities offer an alternative form of learning with different forms of interaction, and a new way of promoting community (Quan-Haase, 2004). As in face-to-face communication, members of an OLC should be able to state what they think, comment on what others have said, collaborate on common statements, and share information in many forms. Inspired by exemplars in online community and conversation, including MySpace, LinkedIn, and Facebook, our lab focused its efforts on the social networking model and the many Web 2.0 technologies these sites incorporate. Stacey (2002) found that a higher quality of electronic communication helps to engage students and aids in their learning of the course material. Web 2.0 technologies, such as blogs and wikis, along with peer-to-peer networking and file sharing, empower individuals to take ownership of their content while also making it easier to pursue social or scholastic ties with their peers. And increasingly, more individuals are gaining access and familiarizing themselves with these technologies, thus making their introduction into the classroom more or less seamless. Research trends support these assumptions. Brescia and Miller (2006) found benefits to using blogging in the classroom, including enhanced student reflection, increased student engagement, portfolio building, and better synthesis across multiple activities. Rollett et al. (2007) discovered that wikis were well suited for team activities, providing individuals with the ability to easily exchange, integrate and develop information through asynchronous means.

As such tools become critical in advanced learning environments, so too grows the market for out-of-the-box solutions that incorporate all-in-one Web 2.0 components. Discussed in detail in the next chapter, this dissertation adopts a design science research (DSR) approach to the construction of an OLC that offers greater potential for learning and community building.


CHAPTER THREE – RESEARCH METHODOLOGY

Design Science Research

The transition towards more student-centered learning tools is occurring in new releases of Sakai and Blackboard. These tools are now beginning to offer features such as blogging, profiles and connection-making. However, rather than await the future adoption of this software by our institution, [SL]² was committed to using rigorous research to design, build and evaluate such innovations among willing participants at our university. Additionally, there would be no guarantee that the needs of our university, a small private graduate university, would be met by software that caters predominantly to larger institutions. The Claremont Conversation Online (CCO) Project was launched in spring 2006 with the objective of creating an OLC for CGU. The CCO Project has followed an Action Design Research methodology, which incorporates aspects of both Design Science and Action Research. Action Design provides a model for cross-fertilizing action research (meeting the needs of our university) with design research (devising information technology [IT] artifacts). Illustrated in Figure 5, Action Design provides a degree of overlap when intervening in an organizational setting rather than designing and implementing an artifact before or after the fact (Cole et al., 2005). This cross-fertilizing of action and design was critical in establishing the CCO. The action part of our research largely focused on regular meetings with university stakeholders to get sponsorship and commitment from course instructors. The design aspects focused primarily on building and piloting specific social learning technologies.

Figure 5 – Action Design Cycle

While the larger CCO project is Action Design, this dissertation maintains roots specifically with Design Science Research (DSR). In this dissertation, I implemented and tested new technologies that can further foster learning and community across the CCO. Specifically, this research focuses on the design and implementation of 1) a reputation system and 2) a recommender system to support learning and social interaction. Table 2 identifies the criteria for DSR and how my research meets these objectives.


Table 2 – The CCO as Design Science Research (Hevner et al., 2004)

Design as an Artifact
  Criterion: DSR must produce a viable artifact in the form of a construct, a model, a method, or an instantiation.
  Application to this research:
  • Artifact 1: CCO reputation system plug-in.
  • Artifact 2: CCO recommender system plug-in.

Problem Relevance
  Criterion: The objective of DSR is to develop technology-based solutions to important and relevant business problems.
  Application to this research: The artifact will address these observed problems with the CCO:
  • Low numbers of user connections made across the CCO.
  • Low numbers of feedback/comments generated by existing populations on blog content.
  • Large drop-off of users after each semester.

Design Evaluation
  Criterion: The utility, quality, and efficacy of a design artifact must be rigorously demonstrated via well-executed evaluation methods.
  Application to this research:
  • Artifacts will be pilot tested. Utility, quality and efficacy will be measured through survey research and content analysis.

Research Contribution
  Criterion: Effective DSR must provide clear and verifiable contributions in the areas of the design artifact, design foundations, and/or design methodologies.
  Application to this research: Contributions will come from the evaluation of my design on:
  • retention,
  • learning,
  • social interaction and community.

Research Rigor
  Criterion: DSR relies upon the application of rigorous methods in both the construction and evaluation of the design artifact.
  Application to this research:
  • Artifacts follow the proposed theoretical model for learning and community.
  • Survey instruments based on existing research on learning and community.
  • Instruments validated by members of the [SL]².

Design as a Search Process
  Criterion: The search for an effective artifact requires utilizing available means to reach desired ends while satisfying laws in the problem environment.
  Application to this research:
  • The project began with a search to discover new ways to enhance community and social interaction.
  • Understanding how recommender and reputation systems are implemented is critical for determining if they can be successfully implemented into the CCO.
  • Existing design patterns used in construction of both artifacts.

Communication of Research
  Criterion: DSR must be presented effectively both to technology-oriented as well as management-oriented audiences.
  Application to this research:
  • The outlets for this research will be in the form of journal articles, conference proceedings and the dissertation itself.
  • Communication of my findings will also be provided to CCO stakeholders, including course instructors, students and various CGU administrators.


Intelligent Systems Design

In Design Science Research (DSR), the researcher is concerned with the way things ought to be in order to attain goals, and in order to achieve such goals the researcher devises artifacts (Simon, 1996). Furthermore, these IT artifacts are intended to solve identified organizational problems (Hevner et al., 2004; Walls et al., 1992). An artifact, as detailed by Benbasat and Zmud (2003), is any hardware/software design encapsulating structures, routines, norms, and values implicit in the rich contexts within which the artifact is embedded. In this research the design and integration of a recommender and reputation system into the CCO will serve as the IT artifact. More specifically, the DSR methodology will help answer two research questions:

1. Can a reputation system foster learning, motivation and/or social interaction?
2. Can a recommender system foster social interaction, community and/or retention?

Existing CCO Landscape

In 2006, [SL]² evaluated a variety of proprietary and open source social software. Based on cost, usability, extensibility, customizability and the range of features each offered, we decided on Elgg, a relatively nascent tool at the time, for its range of social features and easy-to-use interface. It should also be noted that since 2006, Elgg has received much support from the open source community and launched Version 1.0 on August 18, 2008.


Available through SourceForge.com, Elgg comes bundled with blogging, file sharing, the ability to create unlimited sub-communities, and peer-to-peer (P2P) networking capabilities. Additionally, Elgg provides the ability to restrict access to data at a number of levels, including the individual level, the community level, the logged-in-user level and custom levels. Figure 6 provides a snapshot of the [SL]² research community. The software is designed so that users have access to their own suite of Web 2.0 tools (illustrated by the top-most menu of Figure 6) distinct from each community's suite of Web 2.0 tools (illustrated by the left-side menu of Figure 6).

Figure 6 – Elgg Community (screenshot showing the user menu along the top and the community menu along the left side)


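To make the access-restriction idea above concrete, the following is a minimal, illustrative sketch of how per-item access levels of that kind might be modeled. It is written in Python purely for illustration; it is not Elgg's API (Elgg is PHP-based), and all names here are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Optional, Set

# Hypothetical access levels mirroring the ones described above.
PRIVATE, LOGGED_IN, PUBLIC = "private", "logged_in", "public"

@dataclass
class Item:
    owner: str
    access: str = PRIVATE                      # PRIVATE, LOGGED_IN, or PUBLIC
    community: Optional[str] = None            # restrict to one community, or
    allowed_users: Set[str] = field(default_factory=set)  # a custom user list

def can_view(item: Item, viewer: Optional[str] = None, viewer_communities=()) -> bool:
    """Decide whether `viewer` (None = anonymous visitor) may see `item`."""
    if item.access == PUBLIC:
        return True
    if viewer is None:
        return False
    if viewer == item.owner or viewer in item.allowed_users:
        return True
    if item.access == LOGGED_IN:
        return True
    return item.community is not None and item.community in viewer_communities

post = Item(owner="alice", access=LOGGED_IN)
print(can_view(post, viewer="bob"))  # True: any logged-in user may view
print(can_view(post))                # False: anonymous visitors may not
```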

CCO User Population

From 2006 through 2008, the CCO was used predominantly by two distinct groups at CGU. The first group consisted of students enrolled in courses in the School of

Information Systems and Technology (SISAT) and represented 66% of total CCO users. The second group consisted of students taking transdisciplinary courses (t-courses) and represented 34% of total CCO users. Over a two-year period, data was collected from 118 students enrolled in t-courses resulting in 84 survey responses.

CCO Blogging Capabilities

Blogging was the most widely used CCO tool, with 714 blog entries created across 118 users (or 6.1 per user).¹ One possible reason for the successful adoption of the blog was our population's familiarity with blogs. As shown in Figure 7, 80% of respondents knew what a blog was prior to using the CCO.

Figure 7 – Technology Familiarity (n=84)

[Figure 7 plots respondents' familiarity (Very Familiar / Familiar / Not Familiar) with file sharing, wikis, blogging, and online social networks, as a percentage of respondents.]

Outside of general familiarity, there are additional reasons for the widespread adoption of blogging. Due to their journalistic styling, blogs are one possible way for students to transmit individual course assignments. The reverse is also true, where blogs become a means for instructors to communicate information back to the students.

¹ For a complete report of findings, see Garrett et al. (2007) and Thoms et al. (2007, 2008, 2009).


A content analysis identified each course utilizing the community blog as a space for discussion of guest lectures, reading assignments and overall class topics. Figure 8 provides a screenshot of the CCO blogging interface.

Figure 8 – Elgg Blogging

While blogging was common in each course, blogging frequencies varied. Some courses relied predominantly on blogs and blog comments as the primary method for assignments, while other t-courses used a combination of blogging and wiki-writing. Qualitative feedback provided greater insight into how effective the blogging functionality was. One student stated, "I can express myself through blogs; as a result I can share my opinion and understand myself more. I also [reviewed] what I [learned] from class through [the] blog." And another individual voiced, "[The community blog] enabled our group to work away from campus, which was good for basic group communication." Other responses, though not directly referencing the blog functionality, also give insight into what aspects of the software were beneficial. One individual stated, "My

favorite part was that I could see other people's work, which allowed me to learn from them.” Another student noted, “I think it's beneficial to share knowledge and opinions with classmates and the [software] helped support these sharing activities.” Enhancing CCO Blogging Capabilities: Allowing for rating blogs While blogging provides a beneficial method for individual contribution into the course community, feedback on how others perceive contributions may offer a more interactive blogging experience. This feedback can be in the form of blog comments, inclass comments or other methods. However, from a content analysis it was discovered that few comments if any were made for each blog post. On average, less than one comment existed per blog post. One reason for this may be the time needed to construct a valuable comment. Sometimes users just want to let the blogger know that they have read someone’s blog entry and whether or not they thought it was interesting, or not. However, with no ‘quick’ mechanism to do so, many users will opt not to undertake the tedious 4-step process of submitting a comment that involves 1) clicking the blog entry, 2) typing in the comment in the comment box, 3) clicking the Submit button, and 4) awaiting acceptance of the comment by the user. This problem of passive participation is not new nor is it isolated to the CCO. In some online communities, the number of passive participants can be as high as 84% (Nonnecke and Preece, 2000). This passive form of participation is identified by social software literature as lurking. Lurkers are those individuals who read but do not contribute in OLCs. And while research has suggested that not all lurking is negative, since lurkers may spread knowledge across other OLCs (Soroka and Rafaeli, 2002; 29

Takahashi et al., 2006), some form of participation from these silent parties is still largely sought. Maybe it is the case that users want to provide quick and easy feedback to let a blogger know that they found a post interesting or relevant. Kurhila et al. (2003) extended the features of a blog by implementing a document map feature using colors and brightness that provide users with the total time a document has been viewed, using dots dynamically displayed next to a document to depict the users who are currently viewing it. As a means to foster collaboration, the tool helped students discover group partners, by placing “triggers” that alert them when another student, even a lurker, shows an interest in a certain topic. Another possible way to elicit greater participation and help foster a community of de-lurkers (i.e., active participants), is through a rating system. Rating systems are a quick and easy way for users to leave an opinion or evaluation about an object, person, place or thing (Yahoo! Developer Network, Rating Systems, 2008). Consequently, these ratings can be used to help individuals participating in a social network to acquire a personalized reputation and help develop reputations of others (Yahoo! Developer Network, Reputation Systems, 2008). Ecommerce websites such as eBay, Yahoo! Auction and Amazon, products are rated by consumers, thus adding to the collective knowledge-base for that product (Resnick et al., 2000; Sazbater and Sierra, 2001; Mengshu et al., 2005). In ratings systems, individuals are often presented with simple 1 through x star rating schemas, where more stars indicates higher degrees of satisfaction or interest as perceived by a consumer. A ratings system would also provide the necessary data for reputation building. Prior research has discovered that building a reputation is often used by individuals as a 30

mechanism to achieve status within a collective and can be a strong motivator for active participation (Donath, 1999; Jones et al. 1997). Additionally, Wasko and Faraj (2005) discovered that a significant predictor of individual knowledge contribution is the perception that participation enhances one’s professional reputation.

Reputation Systems Used in Education

The above discussion references some technological aspects of ratings and reasons why they are used in ecommerce. However, while my proposed implementation of a ratings system across educational blogging is unique, the notion of peer rating systems in education is far from novel. Many instructors use peer ratings as a mechanism to receive feedback on collaborative course projects. In education, where assessment typically means instructors grading student material, collaborative ratings systems may be viewed with suspicion. However, for an OLC, it is possible that a reputation system can offer a new dimension to learning and social interaction, and thus should be incorporated into the CCO. This belief is supported by educational research in the area of peer assessment. Johnston and Miles (2004) found that students took peer assessment seriously, and Pope (2005) found that both peer and self assessment contribute positively to a student's course performance. Johnston and Miles (2005) further discovered that peer assessment allowed students to learn about their own effectiveness in a group setting, and Somervell (1993) found that peer assessment helped promote independent, reflective and more critical learners. Peer assessment also helps to motivate participation and foster student initiative to learn (Rafiq and Fullerton, 1996).


Ongoing studies are also being done using PeerWise, a system that allows students to contribute test questions, provide explanations, answer questions contributed by other students, rate questions for difficulty and quality, and participate in on-line discussions of all these activities. In 2008, Denny et al. added a feature whereby students can recommend the contributions made by another student through a ratings mechanism (Denny et al., 2008). As a specific design element in this research, I explore the construction and implementation of a reputation system for students using the CCO. My tool extends the current notion of peer ratings in education to allow students to rate the content of their peers, not only for group activity but for all individual blog contributions as well.

Enhancing OLC Connections: Recommender System

As noted in Chapter Two, one of the problems our software has experienced is a tremendous drop-off in ongoing usage after each semester (refer back to Figure 1). The CCO has a growing population of users (750 users as of August 2008). Of these users, only 115 (or 15%) have more than one connection. Additionally, of these 115 users, the average number of connections is a shade below five (4.9 per user). Yet there appears to be some correlation between connections and site retention, and over half of these users remain active across the CCO (refer back to Figure 2). Consequently, if the CCO can foster new connections, it may also be possible to increase overall system retention.

Existing research places a ceiling on interface design, stating that design can take an online social network (OSN) only so far, and although developers can control the design of an OSN, it becomes more difficult, if not impossible, to control social interaction across the OSN (Preece, 2001). However, innovative design can help foster social interactions across a site, and new connections can invoke a feeling of freshness for the system, providing a user with something new (e.g., blogs) or someone new (e.g., users) to interact with. One way to foster this fresh start is to automatically generate recommendations through a recommender system. Thus, the proposed recommender system uses blog ratings from the reputation system to recommend new potential connections for site users based on some degree of similarity that exists between ratings.

Collaborative-based Recommender Systems

A recommender system attempts to present individuals with information to help them decide what products or services to choose (Schafer et al., 2001). Prior research has grouped recommender systems into three categories, determined by the kinds of input each recognizes and the recommendations they produce (Burke, 2002; Adomavicius and Tuzhilin, 2005; Lo, 2006). A popular category of recommender systems is the collaborative-based model, which is used by popular sites such as YouTube and Netflix. Collaborative filtering selects products or services based on users' evaluations of those same products or services (Funakoshi, 2000; Lo, 2006). Such techniques are now commonplace across many ecommerce sites offering consumers automated assistance in finding products and services (Schafer et al., 2001). Collaborative-based recommender systems are best illustrated by Figure 9 (Sarwar et al., 2000), which shows customers providing information via an online system. Subsequently, computer algorithms calculate and present consumers with a list of recommendations based on those or similar items that the user has rated in the past. This is done through the use of various backend system components such as a ratings database and a correlation database.

Figure 9 – Collaborative-based Recommender System (Sarwar et al., 2000)

Although this model is good for recommending items to individuals, Terveen and McDonald (2005) argue that recommending social connections is social matchmaking and fundamentally different from providing product recommendations. However, Adomavicius and Tuzhilin (2005) argue that in both cases, a recommender system, or social matching system, attempts to address the fundamental problem of information overload by helping users search a wide variety of content of which they have no firsthand knowledge. Furthermore, recommender systems have been used within existing social contexts as well. Zhang and Hiltz (2003) implemented a system feature to recommend users of an online research community to other users who shared similar interests based on their user preferences. Their goal was to turn lurkers into active participants. Dan-Gur and Rafaeli (2006) discovered that acceptance of recommendations among users of social collaborative systems depended on the type of group that made the recommendations and on the users' involvement in the formation of that group. However, user-based collaborative filtering systems are not without disadvantages.

Issues Facing Collaborative Filtering Recommender Systems

There are a number of issues facing collaborative-based recommender systems, primarily relating to trust, cold starts and sparse data. A key ingredient for a successful OLC is trust. Trust amounts to the extent to which the community is a valid source of knowledge and a safe and reliable place for interaction (Davenport and Prusak, 1998; Preece, 2001). Consequently, if an existing system lacks a measure of trust, so, too, will any recommendation system. Without trust in the CCO, users will have little confidence in recommendations it makes. In earlier research we measured various levels of system trust and discovered that high levels already existed across the CCO (Thoms et al., 2007). Since the CCO is CGU's specific online learning community, there are few, if any, outsiders that participate and have access to the software, providing individuals with a relatively safe operating environment. Additionally, there are varying degrees of access any individual or community can establish, closing off access to outsiders and/or restricting access to only select individuals.

A second issue with recommender systems, and one that the CCO cannot avoid, at least in its initial phases, is handling new users. Also known as the cold start problem, a collaborative-based algorithm will not have initial data with which to recommend users who have no experience using the system and rating blog posts (Schein et al., 2002; Haruechaiyasak et al., 2004; Lam et al., 2008). The recommender system algorithms rely on a history of user ratings, and so will be unable to recommend connections until feedback is provided to the system by the user.


Lastly, recommender systems often face a problem of sparse data. An ongoing struggle for many online communities, including the CCO, is maintaining a critical mass of users. First conceptualized by Hiltz and Turoff (1978), a critical mass considers the required level of activity needed for the sustainability of any interactive environment. An online community can thrive only if there are sufficient people and enough activity to make it attractive and worthwhile (Palloff and Pratt, 1999; Harrison and Zappen, 2005). Consequently, this notion of a critical mass returns us to the degree of social capital and social presence that exists within the CCO. A goal of this research is to combat sparse levels of activity with a system design that encourages users to participate through a quick and easy rating system, thus maximizing the potential for forming connections. More specifically, the success of the recommendation system should be directly proportional to the number of blog ratings that exist.

Existing Recommender System Experiments

While there are a number of ongoing experiments that implement ratings systems and recommender systems, the project currently under way at Digg is closest to my research. Digg is a popular blog aggregator and uses a "thumbs up" or "thumbs down" user feedback mechanism to rate blogs. An interesting feature of Digg is that it can be integrated into the web browser, which allows individuals to rate content from across the web, which Digg then aggregates. Digg launched a recommender system (Figure 10) in summer 2008, around the same time I built my systems. The recommender system requires individuals to have a login and tracks ratings entered by users for later comparison.

Figure 10 – Digg Recommendations Page 

When Digg launched its recommendation engine, Kevin Rose (2008) said the following in an online press release: “We’re launching the Digg Recommendation Engine beginning this week. The feature will be in beta and presented to registered Digg users first, based on a random sampling of logged-in users. Look for the red beta flag on your Upcoming tab - this means you now have access to the Recommendation Engine. We hope to roll it out to everyone within the week or so. The Recommendation Engine is a cool way to discover new content on Digg. Now that there are more than 16,000 stories submitted to the Upcoming section every day, it’s difficult to sort through everything to find the best content. The Recommendation Engine uses your past digging activity to identify what we call Diggers Like You (who you can see on the right hand nav) to suggest stories you might like. “

Digg also published initial results of the recommender engine after thirty days of usage (Kast, 2008). They discovered positive correlations with site usage including:
1. Digging activity is up significantly: the total number of Diggs increased 40% after launch.
2. The system is generating over 54 million recommendations, with the average Digger having nearly 200 recommendations from an average of 34 "Diggers like you".
3. Friend activity/friends added is up 24%.
4. Commenting is up 11% since launch.

The project at Digg shows how ratings can be used to enhance the blogging experience by recommending to users blog posts that match their individual preferences. Discussed in detail in Chapter Four, I mirror the model used by Digg and provide users with a mechanism to rate blog posts, using these blog ratings as a mechanism for user recommendations.


CHAPTER FOUR – ARTIFACT CONSTRUCTION

In this chapter, I discuss the system designs and construction for both the blog reputation system and the user recommender system. The software was coded using elements of PHP, HTML, JavaScript, CSS, AJAX and MySQL configured to run on an Apache web server. The software contains over 1800 lines of new code added to the Elgg open source software libraries. Using a basic COCOMO-style calculator, available online from the University of Southern California's website, the construction of both the recommender and reputation system was estimated to require an effort of 3 person-months of development over a schedule of 3.67 months (COCOMO 81 Intermediate Model Implementation, 2008).
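To give a sense of where such an estimate comes from (the specific cost-driver ratings entered into the online calculator are not reported here, so this is only an illustrative sketch of the form of the calculation, not a reconstruction of the actual estimate), COCOMO 81 derives effort and schedule from the size of the new code:

$$E = a \cdot (\mathrm{KLOC})^{b} \cdot \mathrm{EAF} \quad \text{person-months}, \qquad D = 2.5 \cdot E^{0.38} \quad \text{months}$$

For an organic-mode project under the intermediate model, $a = 3.2$ and $b = 1.05$, so roughly 1.8 KLOC of new code yields a nominal effort of $3.2 \cdot 1.8^{1.05} \approx 5.9$ person-months; an effort adjustment factor (EAF) around 0.5, reflecting favorable cost-driver ratings, would bring this to approximately 3 person-months, with a corresponding schedule of $2.5 \cdot 3^{0.38} \approx 3.8$ months, consistent with the reported figures.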

Reputation System Design

Designing a reputation system for the CCO began with the rule of thumb, "provide a simple way to allow individuals to provide feedback on a blog post." While there are a number of different design patterns available, I follow the pattern encoded by the Yahoo! Developer Network (2008). Additionally, for the look and feel of the interface, I mirror Amazon's interface design. Amazon attracts over 615 million visitors annually and maintains a database of hundreds of thousands of ratings for everything from books to food. Amazon provides more than a basic mechanism to rate items (Figure 11); it also provides a breakdown of a product's ratings. I believe this added design feature will be beneficial for CCO users since it provides insight into how other users rated an entry.


Figure 11 ­ Amazon Review System 

Reputation System Design Choices
1) The CCO rating system builds on the existing open source component, (Unobtrusive) AJAX Rating Bars v1.2.2 (March 18, 2007), available via http://masugadesign.com/the-lab/scripts/unobtrusive-ajax-star-ratingbar/.
2) Individuals cannot rate their own blogs. Research is clear in stating that individuals are unable to provide unbiased opinions about their own work (Goffin and Anderson, 2007; Kaufman et al., 1999).
3) Similar to sites such as Amazon, ratings are on a scale of 1-5 stars, where 1 star is the lowest rating. I expect individuals to understand that 1 star represents a low rating and 5 stars represents a high rating. Upon hovering over a star, a pop-up will indicate the number of stars out of 5 (e.g., 1 out of 5) (shown in Figure 12).


4) A complete breakdown of ratings becomes available when hovering over the Avg. rating (see Figure 13).
5) Ratings are displayed and editable at the individual blog level and the blog summary level.
6) Ratings are displayed below the blog post title.
7) If a user has not rated a blog, the text "Rate this blog" appears next to the stars (shown in Figure 12).
8) If a user is not logged in, "Log in to rate this blog post" appears next to the stars.
9) To avoid ballot-stuffing (Bhattacharjee and Goel, 2005; Krukow et al., 2005), an important factor in abuse and mistrust, users can rate a blog post only once, and users who are not logged into the system cannot rate a blog post.
10) If a user has already rated a blog post, that user can change his or her vote by selecting a different rating.
11) Ratings use AJAX to eliminate page refreshing.
12) A user can delete a blog post, subsequently deleting those ratings from his or her profile.
13) However (continued from 12), ratings are not deleted from the database when a blog post is deleted, for the purpose of creating recommendations based on those ratings.
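As an illustration of how these design choices fit together, the following is a minimal Python sketch of the rating-storage behavior described above: one editable rating per user per post, no self-rating, a per-star breakdown, and ratings retained after a post is deleted. This is not the deployed implementation, which is written in PHP/AJAX against Elgg's MySQL tables; all names below are hypothetical.

```python
from collections import Counter

class BlogRatings:
    """Toy in-memory store: one editable rating per (post, rater), kept after post deletion."""

    def __init__(self):
        self.authors = {}   # post_id -> author_id
        self.ratings = {}   # (post_id, rater_id) -> stars (1-5)
        self.deleted = set()

    def register_post(self, post_id, author_id):
        self.authors[post_id] = author_id

    def rate(self, post_id, rater_id, stars):
        if rater_id == self.authors.get(post_id):
            raise ValueError("users cannot rate their own blog posts")   # design choice 2
        if not 1 <= stars <= 5:
            raise ValueError("ratings use a 1-5 star scale")             # design choice 3
        # Re-rating simply overwrites the earlier vote (design choice 10).
        self.ratings[(post_id, rater_id)] = stars

    def delete_post(self, post_id):
        # The post disappears, but its ratings remain for recommendation purposes (choice 13).
        self.deleted.add(post_id)

    def average(self, post_id):
        stars = [s for (p, _), s in self.ratings.items() if p == post_id]
        return round(sum(stars) / len(stars), 1) if stars else None

    def breakdown(self, post_id):
        # Per-star counts, e.g. for the hover drill-down next to the average (choice 4).
        return dict(Counter(s for (p, _), s in self.ratings.items() if p == post_id))
```

In the live system the equivalent checks would presumably be enforced server-side when the AJAX rating request arrives, with the star widget simply submitting the selected value.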


Figure 12 ­ Reputation System Interface (Rate This Blog) 

Figure 13 ­ Reputation System Interface (Ratings Drill Down) 


Recommender System Design

The recommender system builds upon ongoing work by Nathan Garrett. In his folio plug-in for Elgg (Garrett, 2009), he has constructed a basic recommender agent that recommends other folio pages based on system-generated tags. For my blog-based user recommender system, I have chosen a different design (depicted by Figure 14), providing users with a specific page (or module) from which to view recommendations. Additionally, the recommender system has some basic intelligence for identifying matches across the CCO.

Figure 14 – Recommender System Design


Collaborative-based Design Algorithm

A number of recommender system approaches exist, including content-based, collaborative-based and hybrid models. The collaborative-based model was chosen for the simple reason that it is easier to request input from individuals, at least initially. A more advanced design can occur in subsequent iterations, using natural language processing to filter tags and parse through user-entered text. The collaborative-based recommender system operates on the basic heuristic that people who have rated items similarly in the past will tend to rate items similarly in the future and therefore have similar tastes. This is a heuristic common across the field (Adomavicius and Tuzhilin, 2005; Herlocker et al., 2004; Resnick et al., 2000). Also known as memory-based algorithms, a collaborative filter will scan the entire user database to make recommendations, utilizing statistical measures to discover new connections with a history of tastes similar to those of the active user (Sarwar et al., 2001). Such algorithms are also known as like-minded-user or nearest-neighbor algorithms.

A common, but rudimentary, statistical measure for discovering recommendations is a weighted sum calculation (Adomavicius and Tuzhilin, 2005). Although this approach was used initially, during construction and piloting, it was modified to use Pearson's correlation coefficient (PCC) to coincide more closely with recommender systems used in ecommerce (Herlocker et al., 1999). Lathia et al. (2008) have shown that a modified PCC, known as a constrained-PCC, can be equally effective in determining recommendations. Consequently, rather than calculating the PCC with a user's mean rating, the constrained-PCC utilizes the midpoint of the rating scale (which in my recommender system is 3). One reason why I do not rely on the mean-based PCC is that a user may rate all blogs (or at least those co-rated by another individual) the same (1, 2, 3, 4 or 5), eliminating any standard deviation and consequently eliminating the possibility for recommendations. Thus, since it is quite plausible for an individual to rate only items they like, or dislike, I rely on the midpoint (or constrained-PCC) rather than the mean.

For interface design, the goal is to create an easy-to-use, non-threatening interface, similar to existing systems such as Facebook™, shown in Figure 15. In this presentation, users are shown the name and icon of the individual the system has calculated as a potential match. The user then has the option of adding that user as a friend, or removing the match from the list of matches.

Figure 15 – People You May Know on Facebook (Facebook™)

Recommender System Design Choices 1) Recommendations will be determined through Pearson’s constrained correlation coefficient.


$$\mathrm{sim}(K,L) \;=\; \frac{\sum_{i}\left(K_i - \tilde{K}\right)\left(L_i - \tilde{L}\right)}{\sqrt{\sum_{i}\left(K_i - \tilde{K}\right)^{2}}\;\sqrt{\sum_{i}\left(L_i - \tilde{L}\right)^{2}}}$$

Where: $K_i$ = user's blog post rating; $\tilde{K}$ = median rating score for K (the midpoint of the rating scale, 3); $L_i$ = potential user connection's blog post rating; $\tilde{L}$ = median rating score for L (also 3); and $i$ ranges over L's co-rated blog posts with K. (A computational sketch of this calculation follows the list of design choices below.)
2) Coefficients are converted to percentages to be more relevant for users. This works since the PCC coefficient is always positive (for recommendations) and is never greater than 1.00.
3) Recommendations follow the Elgg system interface design, where user recommendations are displayed similar to how friends and communities are displayed (see Figure 16). This is also similar to Facebook's interface (see prior Figure 15).
4) Ratings of 3 are disregarded since they are neutral and have the potential to crash the algorithm, as no rules are in place to handle the possibility of zeroed-out denominators resulting from a user rating all co-rated items with 3. In the future, additional heuristics to accommodate neutral ratings can be added.


5) The algorithm filters out neighbors with a similarity of less than 0.1 to prevent predictions being based on very distant or negative correlations. This is consistent with literature (Lathia et al., 2008; Mobasher et al., 2007). 6) Deleting a blog post also removes any reference to the ratings for that post. However, those ratings are still stored and used for comparison to find matches across the site. 7) A link to recommendations appears on the front page to provide users direct access to the new site feature. Figure 16 ­ Recommendations Interface 
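To illustrate how the matching described above might be computed, here is a minimal Python sketch of the constrained-PCC similarity and the neighbor filtering from the design choices (neutral ratings skipped, similarities below 0.1 dropped, coefficients shown as percentages). It is illustrative only; the deployed plug-in is written in PHP against Elgg's MySQL tables, and every name below is hypothetical.

```python
from math import sqrt

MIDPOINT = 3.0        # constrained-PCC uses the scale midpoint rather than each user's mean
MIN_SIMILARITY = 0.1  # neighbors below this threshold are filtered out (design choice 5)

def constrained_pcc(ratings_k, ratings_l):
    """Similarity in [-1, 1] between two users, or None if it cannot be computed.

    ratings_k, ratings_l: dicts mapping blog_post_id -> star rating (1-5).
    """
    # Compare only posts rated by both users; neutral ratings of 3 are skipped (choice 4).
    co_rated = [p for p in ratings_k if p in ratings_l
                and ratings_k[p] != MIDPOINT and ratings_l[p] != MIDPOINT]
    if not co_rated:
        return None
    num = sum((ratings_k[p] - MIDPOINT) * (ratings_l[p] - MIDPOINT) for p in co_rated)
    den_k = sqrt(sum((ratings_k[p] - MIDPOINT) ** 2 for p in co_rated))
    den_l = sqrt(sum((ratings_l[p] - MIDPOINT) ** 2 for p in co_rated))
    if den_k == 0 or den_l == 0:
        return None
    return num / (den_k * den_l)

def recommend(user_id, all_ratings):
    """Return (other_user, similarity %) pairs, strongest matches first."""
    matches = []
    for other, ratings in all_ratings.items():
        if other == user_id:
            continue
        sim = constrained_pcc(all_ratings[user_id], ratings)
        if sim is not None and sim >= MIN_SIMILARITY:
            matches.append((other, round(sim * 100)))  # shown as a percentage (choice 2)
    return sorted(matches, key=lambda m: m[1], reverse=True)
```

For example, two users who have co-rated three posts with stars (5, 4, 2) and (4, 5, 1) would be reported as roughly an 82% match, while a user whose only co-ratings are neutral 3s simply produces no recommendation.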

In the next chapter, I discuss plans to test out these new systems across a live user population.


CHAPTER FIVE – RESEARCH DESIGN

Quasi-Experimental Research

Action Design Research makes it difficult to set up a randomized controlled trial (RCT) experiment in a live field setting. Therefore, my research design is categorized as a one-group, pretest-posttest, quasi-experimental design. Similar to a field experiment (Boudreau et al., 2001; Neuman, 2005, pg. 266), I look to measure the effects of two system components on a specific population within an existing organization. While the organization, a graduate school, is not a "naturally" occurring setting, it is pre-existing, and baselines exist against which to compare results. The literature categorizes the evaluation of a reputation and recommender system on an active population as a live user experiment, as opposed to an offline analysis (Herlocker et al., 2004). There are many advantages to a live user experiment, such as being able to measure a recommender system for user experience and satisfaction and to assess how it can aid learning and CCO retention. While a system pilot was used to test the recommender component, there was no attempt to improve the precision of the PCC algorithm; the goal, rather, was to apply the algorithm within the context of the CCO. OLCs are dynamic environments; therefore, the ability to capture individuals' perceptions within this live environment is critical. There are, however, a number of disadvantages when working with a live site, such as losing control over how individuals will manipulate the software, or whether they decide to use it at all. For this reason, and as a precautionary measure, a series of interviews was conducted to facilitate triangulation.


Research Hypotheses To answer questions surrounding how a reputation and recommender system can foster a better OLC, I devised a set of hypotheses to test outcomes relating to system retention, learning, motivation, social interaction and community. Retention An important aspect of this research was to develop a system that can help increase retention rates across the CCO. I proposed that both a recommender system and reputation system can help foster retention of CCO users. To test these claims I use hypotheses H1, H2 and H3. • H1: High levels of reputation system satisfaction will have a positive impact on planned continued use. • H2: High levels of recommender system satisfaction will have a positive impact on planned continued use. • H3: An increase in total number of user connections [also see (H9)] will have a positive impact on retention. Learning and Motivation An equally important aspect of this research is the development of technologies that foster aspects of course learning. More specifically, I propose that with the adoption of a reputation system, students will perceive higher levels of course learning and course motivation. To test these claims I use hypotheses H4 and H5.


• H4: High levels of reputation system usage will have a positive impact on course learning. • H5: High levels of reputation system usage will have a positive impact on course motivation. Social Interaction and Community Finally, as part of an OLC, a recommender and reputation system should foster aspects of social interaction and community. More specifically, I propose that the addition of both features will produce higher levels of social interaction and build stronger course community. To test these claims I use hypotheses H6, H7, H8 and H9. • H6: High levels of reputation system usage will have a positive impact on social interaction. • H7: High levels of recommender system usage will have a positive impact on social interaction. • H8: High levels of recommender system usage will have a positive impact on community. • H9: Recommendations based on blog ratings will result in more connections made across the CCO.

Pretest and Posttest Analysis

The pretest and posttest instruments were designed for regression and correlation analysis on items associated with retention, learning, motivation, social interaction and community. They captured before and after perceptions of OLCs, blogging, reputation systems and recommender systems. A pretest was used to gather general demographic information as well as to assess an individual's technical proficiency. Furthermore, the pretest captured aspects of learning and social interaction prior to experiencing the CCO. This was used to analyze users' perceptions of the technologies being measured prior to their experiencing the software, and to compare against their levels of perceived usefulness after the course concluded. See the Instruments section for a complete breakdown of pretest constructs.

A posttest was used after members of each group had completed their course. The posttest captured aspects of learning and social interaction after experiencing the CCO. These results were compared against prior research by Thoms et al. (2008, 2009) and Garrett et al. (2008) and with results from the pretest. See the Instruments section for a complete breakdown of posttest constructs.

Retention

H1: High levels of reputation system satisfaction will have a positive impact on planned continued use.

I tested this hypothesis using a regression analysis of end-user satisfaction with the reputation system and an individual's plans to continue using the CCO for the specific purpose of reading and rating blog posts in the future. Items used to measure the ratings system and planned continued use are: Rating_System_Construct (2, 8, 13, 14) + CCO_Satisfaction(8)
• The ability to rate blogs was useful.



Blog ratings offered a great way to exchange feedback with my classmates.



Blog ratings were an excellent addition for this class.



The ability to rate other content (such as wiki pages, files, comments, etc.) should also exist.



My experience using peer ratings was positive.

Rating_System_Construct (15) •

I will continue to use the ratings system to rate content across the CCO.
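As an illustration of the kind of regression described for H1 (the same pattern repeats, with different items, for the remaining hypotheses), the short Python sketch below regresses the planned-continued-use item on a composite satisfaction score. The data values, the item grouping and the use of scipy are assumptions for illustration only; this is not the actual analysis script or data.

```python
import numpy as np
from scipy import stats

# Hypothetical posttest responses (1-5 Likert, 1 = Strongly Agree); rows are respondents.
# Columns stand in for Rating_System_Construct items 2, 8, 13, 14 and CCO_Satisfaction item 8.
satisfaction_items = np.array([
    [2, 1, 2, 2, 1],
    [3, 3, 2, 3, 3],
    [1, 2, 1, 2, 2],
    [4, 4, 5, 4, 4],
    [2, 2, 3, 2, 2],
])
# Stand-in for Rating_System_Construct item 15 (planned continued use).
continued_use = np.array([1, 3, 2, 5, 2])

# Composite satisfaction score per respondent (mean of the construct items).
satisfaction_score = satisfaction_items.mean(axis=1)

# Simple linear regression of planned continued use on reputation-system satisfaction.
result = stats.linregress(satisfaction_score, continued_use)
print(f"slope={result.slope:.2f}, r={result.rvalue:.2f}, p={result.pvalue:.3f}")
```

With both variables coded so that lower values mean stronger agreement, a positive and significant slope would be read as satisfaction with the ratings system predicting stronger intentions to keep using the CCO.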

H2:  High  levels  of  recommender  system  satisfaction  will  have  a  positive  impact  on  planned  continued use.  

I tested this hypothesis using a regression analysis of end-user satisfaction with the recommender system and an individual’s plans to continue using the CCO for the specific purpose of discovering new connections. Items used to measure the recommender system and planned continued use are: Recommender_System_Construct (2, 3, 7, 8) + CCO_Satisfaction(9) •

I found my user recommendations useful.



I looked forward to checking for new user recommendations.



Finding users with similar blog ratings was an excellent way to recommend new connections.



Recommendations for other content (such as blog posts, wiki pages, files, comments, etc.) should also exist.



My experience finding peer recommendations was positive.

Recommender_System_Construct (9)



I will continue to use the recommender system to discover new connections at CGU.

H3: An increase in the total number of user connections [also see (H9)] will have a positive impact on retention.

I tested this hypothesis using data analysis of the total number of new connections made (as a result of the recommender system) and continued CCO use after the semester concluded. Data used to measure the recommender system and retention are: Click data •

Number of new connections made.



Number of active users after the fact.

Learning and Motivation

H4: High levels of reputation system usage will have a positive impact on course learning. 

I tested this hypothesis using a regression analysis of how often individuals rated blog posts and their perceived levels of learning. Rating_System_Construct (1)
• How often did you use conversation.cgu.edu2 to rate blog posts?
Rating_System_Construct (4)
• Blog ratings increased levels of learning for this class.

2 In the survey instruments, conversation.cgu.edu refers to the CCO, since many users are more familiar with the URL address than with the acronym CCO.

H5: High levels of reputation system usage will have a positive impact on course motivation.  

I tested this hypothesis using a regression analysis of how often individuals used the software against their perceptions of how well the reputation system motivated them to rate other blog posts. Items used to measure the ratings system and course motivation are: Rating_System_Construct (1) •

How often did you use the conversation.cgu.edu to rate blog posts?

Rating_System_Construct (5, 6, 7) •

Blog ratings helped me to think more critically while writing blog posts.



Blog ratings helped me to think more critically while reading blogs posts.



Blog ratings increased my motivation to do a good job. Social Interaction and Community

H6: High levels of reputation system usage will have a positive impact on social interaction.  

I tested this hypothesis using a regression analysis of how often individuals rated blog posts and their perceived levels of social interaction. Items used to measure the ratings system and social interaction are: Rating_System_Construct (1) •

How often did you use the conversation.cgu.edu to rate blog posts?

Rating_System_Construct (3) •

Blog ratings increased levels of interaction with my classmates.


H7: A recommender system will have a positive impact on social interaction  

If users are using the recommender system, I propose that they are doing so because they are interested in greater social interaction through the formation of new social connections. I tested this hypothesis using a regression analysis of how often students used the recommender system and student perceptions of social interaction. Items used to measure the recommender system and social interaction are: Recommender_System_Construct (1) •

How often did you use the CCO to check for recommendations?

Recommender_System_Construct (4) •

The user recommendations feature increased interaction with my classmates.

Community

H8: High levels of recommender system usage will have a positive impact on community.  

If users are keen on the recommender system, I assert that they are also building community in the process, simply because they are trying to find new connections. I will measure how a recommender system can foster community through data analysis of how often students used the recommender system against student perceptions of class community. Items used to measure the recommender system and community are: Recommender_System_Construct (1) •

How often did you use the CCO to check for recommendations?

Recommender_System_Construct (5)




The user recommendations feature was an excellent tool for building community in this class.

H9: A blog-based user recommender system will result in more connections made across the CCO.

I will measure how effective blog ratings are in facilitating new connections across the CCO through data analysis of the total number of recommendation links clicked and the number of new connections formed. Data used to measure the recommender system and increases in user connections is: Click data •

Number of new connections made through recommender system.

Content and Activity Analysis In addition to a pretest and posttest, I will also conduct content and activity analysis using Google Analytics and SQL reporting. A Google module that monitors activity across the CCO was installed in January 2008 (see Figure 17). The tool breaks down activity across a number of areas: page views, length of time on site, user clicks, etc. Additionally, SQL reports were used to collect data on how many new social connections exist (recommender analysis) and how many ratings were made (reputation system). Table 3 provides a list of items tracked during data analysis.  


Table 3 – Planned Content Analysis

Site Analysis
Site Visits | Track logons and URLs requested.
Blog Count | Track number of blog entries and number of comments.

Reputation and Recommendation System Plug-in Analysis
Ratings | Track frequency of ratings across the site.
Recommendations | Track frequency the Recommendations (New!) link is clicked. Track number of new social connections made through the recommendations link versus other means.

Figure 17 – Google Analytics

Interview Data

Interviews helped to better understand what aspects of the system worked or did not work, and why. In determining samples for this data, I solicited all CCO individuals indicating interest, on either the pretest or posttest, in participating in a focus group.

CHAPTER SIX – IMPLEMENTATION

Software Pilot

After two months of software development and construction of the research instruments, I piloted each with [SL]². The pilot took place over a three-week period beginning in July. Individuals were asked to rate blog items across the site and frequently check back for potential new recommendations. After two weeks, individuals completed the posttest. During this pilot, bugs in the system were identified and the survey instruments were further validated.

Population: Graduate Courses at CGU

I measured the effects of my systems on graduate courses at Claremont Graduate University (CGU). After the enhancements identified during piloting were made, the technology was production-ready. Pretests and posttests were planned for CGU t-courses being offered during the fall 2008 semester (Table 4). Pretests and posttests were also distributed to select SISAT courses that also used the CCO (Table 5).

Table 4 – Fall 2008 T-Courses
Course Number | Course Title | Instructor | Date / Time | Participation
TNDY 401I | The Nature of Inquiry: Transdisciplinary Perspectives | Jacek Kugler | Monday 7-10 | No
TNDY 402A | Extremism: Transdisciplinary Perspectives | Michael Hogg | Thursday 1:00-3:30 | No
TNDY 402I | Networks, Discourse & the Growth of Knowledge | Jed Harris | Tuesday 4:00-6:50 | Yes
TNDY 402I | Jazz, Politics & American Culture | Wendy Martin | Monday 4:00-6:50 | Yes

Table 5 – Fall 2008 SISAT Courses
Course Number | Course Title | Instructor | Date / Time | Participation
IS346 | Social Technologies | Lorne Olfman | Thursday 4:00-6:50 | Yes
IS366a | Qualitative Research Methods | Ben Schooley | Tuesday 4:00-6:50 | Yes
IS305 | System Analysis and Design | Terry Ryan | Monday 4:00-6:50 | Yes

Since the goal of my research is to foster learning, social interaction and community across our university's online CoP, I use Daniel et al.'s (2003) guidelines for CoPs to assert that academic courses can be considered fledgling CoPs. Table 6 maps each aspect of a CoP and whether and why participating graduate courses adhere to these guidelines.

Table 6 – Graduate Courses as CoPs
Requirement | Y/N3 | Comments
Shared interests | Y | Passing the course; receiving a good grade; collaborating, etc.
Autonomy in setting goals | Y | Individuals have the autonomy to expand or limit their participation on a number of levels
Common identity | Y | All are members of our specific institution; all are masters students or doctoral students
Awareness of social protocols and goals | Y | All are participants in the CGU academic community and are becoming more established in their research and/or respective fields
Shared information and knowledge | Y | With the addition of an OLC, individuals have the ability to use the OLC to discover and share information
Awareness of membership | Y | Students register themselves for courses; course syllabi make learning objectives explicit
Voluntary participation | U | While some courses are required, others are electives
Effective means of communication | Y | Traditional instructor/student-based communication; the OLC provides a number of communication methods

3 Y=yes, N=No, U=Undecided.

Intervention: Instructor and Student Training

Prior to implementation, I met with course instructors to discuss how each could align their course syllabi with the CCO. For example, if a course required weekly assignments based on selected course readings or guest lectures, I would recommend the community blog as an engaging mechanism for individuals to express themselves. At this time, I requested that instructors make use of the new rating system to foster more engaged activity across the class community and showcased how my dissertation hopes to foster new connections through a recommendation system. Wherever possible, I suggested to instructors that they rate blog posts for course credit. Lastly, instructors were trained on other features of the CCO, consistent with past instructor training on the CCO.

For user training, I (or a fellow [SL]² member) met twice with each class during the first weeks of class. During these first meetings I conducted the pretest and focused on providing a general overview of the CCO technologies. Also during this time I provided a "Getting Started" orientation assignment with instructions for users to get logged on. The assignment also walked through setting up a profile, uploading a personal icon, publishing an initial blog entry and joining a course community. Additional instructions were added to showcase how users could discover new connections across the site, including a background on the recommender system and how it functioned (some of these aspects were further showcased in a video tutorial). The second session focused on any issues individuals had logging on and/or setting up their first blog and user profile. I was also available via email and, if needed, for subsequent in-person training sessions.

Intervention: Site Layout

The CCO was modified to randomly assign CCO users access to blog ratings (Figure 18) or recommendations (Figure 19).

Figure 18 – Elgg User A (Top blog posts)


Figure 19 ­ Elgg User B (User recommendations) 

The result was an even distribution of users shown recommendations (111 users) and individuals shown blog ratings (111 users) from the homepage. The next chapter details the experimental results of both systems.

CHAPTER SEVEN – RESULTS

Data Collection Timeline

Data from pretests, posttests, site statistics and interviews was collected during the 2008/2009 academic year. This timeframe spanned August 15, 2008 through February 27, 2009.


Pretest Data

Before implementing the CCO, a pretest was distributed to five courses at CGU, which tracked student perceptions of elements of the online learning environment, resulting in 65 completed surveys. Our population comprised 58% male and 42% female respondents, with 51% of users under the age of 30 (23% were between 30 and 40 and 22% were between 40 and 50).

CCO Ratings System (pretest)

A portion of the pretest asked individuals to rate their perceptions of how a rating system could foster learning and interaction across a five-point numeric scale. Detailed in Table 7, on average, respondents were not optimistic that a ratings system could impact either learning or interaction; 36% of respondents indicated that the ability to rate the work of their peers was important. Additionally, less than half of respondents felt that a rating system would increase learning (29%) or interaction (33%). However, the majority of individuals chose neither to agree nor disagree with these statements (49% and 51%). Table 8 provides a detailed breakdown of responses.

Table 7. Ratings System Breakdown (pretest)
Survey Item | Avg. | StD | n
A rating system for blogs will increase interaction with my classmates. | 2.86 | 0.98 | 65
A rating system for blogs will increase learning for this class. | 2.92 | 0.96 | 65
The ability to rate the work of my peers is important. | 2.97 | 1.03 | 65
The ability for my peers to rate my work is important to me. | 2.75 | 1.10 | 65

Table 8. Ratings System Details (pretest)
1=Strongly Agree, 2=Agree, 3=Neither Agree nor Disagree, 4=Disagree, 5=Strongly Disagree
Survey Item | 1 | 2 | 3 | 4 | 5
A rating system for blogs will increase interaction with my classmates. | 8% | 25% | 49% | 11% | 8%
A rating system for blogs will increase learning for this class. | 6% | 23% | 51% | 12% | 8%
The ability to rate the work of my peers is important. | 5% | 31% | 37% | 18% | 9%
The ability for my peers to rate my work is important to me. | 14% | 28% | 34% | 18% | 6%

CCO Recommender System (pretest)

Detailed in Table 9 and Table 10, individuals responded more favorably that a recommender system could increase interaction and help build community. Over half (54%) indicated that a recommender system would increase interaction with their peers and 47% indicated that it could help in building community. More encouraging was the fact that 69% responded that they would be interested in discovering new connections at CGU, with 60% of responses stating that they would do so through the CCO. Table 9 provides averages and standard deviations for these results.

Table 9. Recommender System Breakdown (pretest)
Survey Item | Avg. | StD | n
A recommender system will increase interaction with my classmates. | 2.45 | 0.97 | 65
A recommender system will be an excellent tool for building community in this class. | 2.55 | 0.92 | 65
I am interested in discovering potential new connections at CGU. | 2.09 | 0.96 | 65
I would use a recommender system to discover potential new connections at CGU. | 2.37 | 0.99 | 65

Table 10. Recommender System Details (pretest)
1=Strongly Agree, 2=Agree, 3=Neither Agree nor Disagree, 4=Disagree, 5=Strongly Disagree
Survey Item | 1 | 2 | 3 | 4 | 5
A recommender system will increase interaction with my classmates. | 15% | 39% | 26% | 13% | 2%
A recommender system will be an excellent tool for building community in this class. | 10% | 37% | 32% | 13% | 2%
I am interested in discovering potential new connections at CGU. | 28% | 41% | 17% | 7% | 2%
I would use a recommender system to discover potential new connections at CGU. | 17% | 43% | 22% | 13% | 2%

Site Statistics

CCO User Activity From August 2008 through December 2008, the CCO had 222 distinct users and averaged 170 active users4 per month. Compared with numbers from spring 2008, where the CCO averaged 305 active users, total numbers of active users declined by roughly 45%. There were 56 active users with more than one connection. While this number, too, is lower compared with 76 active users with more than one connection in spring 2008, percentages were up from 25% in the spring to 33% in the fall. Table 11 details the breakdown of total number of active users, total users with more than one friend and total active users with more than one friend.  

4 An 'active' user is a user who has logged in at least once within a given month.


Table 11. Active User Count
Category | September | October | November | December | Average
Active Users | 202 | 123 | 180 | 172 | 169.25
Users with friends > 1 | 155 | 149 | 151 | 152 | 151.75
Active users with friends > 1 | 52 | 59 | 56 | 55 | 55.5

Figure 20 and Figure 21 illustrate the breakdown of the active users and the active users with more than one connection across the 2008 fall semester. Similar to trends in 2006/2007, active users decreased during the fall 2008 semester (by 15%). The number of active users with more than one friend rose from 52 (or 25% of all active users in September) to finish in December at 55 (increasing by 6% to account for 32% of all active users in December).

Figure 20 – Active User Count (all users)

Figure 21 – Active User Count (friends > 1)

CCO User Created Content

Illustrated in Figure 22, 700 blog posts were created, generating 809 blog comments and 623 blog ratings during the fall 2008 semester. Additionally, 311 new connections were made across the CCO by 84 unique users. Detailed in Table 12, the homepage generated 160 direct blog hits from 58 distinct users.5

Figure 22 – CCO Content Creation

5 With the exception of Table 13c, all site statistics are aggregated without admin counts (user ids 1, 11 and 6).

Table 12 – Page Activity
Category | Sept-08 | Oct-08 | Nov-08 | Dec-08 | Total | Unique
Blog hits from homepage (unique user) | 40 | 15 | 4 | 17 | - | 58
Blog hits from homepage (total) | 76 | 41 | 26 | 17 | 160 | -
Recommender hits | 59 | 21 | 5 | 25 | 110 | 73
Recommender hits from homepage | 1 | 6 | 0 | 3 | 10 | 9

CCO Ratings Specific Data

Data through December (Table 13a) details the number of ratings created across the site. From 700 blog posts, 623 blog ratings and 809 blog comments were created.

Table 13a. Blogging Activity (Sept. 2008 – Dec. 2008)
Activity | Aug-08 | Sep-08 | Oct-08 | Nov-08 | Dec-08 | Avg. | Totals
Blog posts | 5 | 245 | 224 | 148 | 78 | 140 | 700
Blog Ratings (total) | 213 | 129 | 145 | 97 | 39 | 117 | 623
Blog Ratings (unique user) | 13 | 35 | 25 | 14 | 10 | 19 | 64
Comments Created | 7 | 290 | 174 | 191 | 147 | 162 | 809

Table 13b provides a breakdown of how individuals rated blog posts across the site. The majority of ratings for blog posts were 5 Stars (56%) or 4 Stars (27%), while 1-Star ratings (4%) and 2-Star ratings (4%) were much less used.

Table 13b. Ratings Breakdown (Sept. 2008 – Dec. 2008)
 | 1 Star | 2 Stars | 3 Stars | 4 Stars | 5 Stars
Count | 25 | 25 | 58 | 167 | 348

Table 13c provides a breakdown of blogs and blog ratings. Of the 700 blog posts created, 503 posts were rated (72%), with 215 posts having more than one rating (31%). It should be noted that, excluding administrator ratings, these numbers are 391 (56%) and 120 (17%), respectively.

Table 13c. Ratings Breakdown (Sept. 2008 – Dec. 2008)
 | Blog Posts | Rated Blog Posts | Blog Posts with Ratings > 1
Count (admin ratings included) | 700 | 503 | 215
Count (without admin ratings) | 700 | 391 | 120

CCO Recommender Specific Data The recommendations page was viewed a total of 110 times with 10 hits from 9 distinct users coming from the homepage. From the active CCO user population, 311 new peer connections were made by 83 distinct users. However, of these 311 new connections made, 3 users used the recommender system to make these connections resulting in a total of 8 new connections made through the recommender system. In February 2009, these 83 users were still active, having logged on in either January or February.

Posttest Data

Posttest questionnaires were distributed to students across the same five graduate courses receiving the pretest. I received 63 usable responses (roughly 37% of system users). Unfortunately, I was unable to link posttest data with pretest data or posttest data with site data. Of the 63 responses received, 29 indicated having used the rating system periodically throughout the semester, 22 indicated having used the recommender system and 15 indicated having used both systems. For data analysis across groups, Table 14 represents the breakdown of system adoption.

Table 14. Population Key for Survey Constructs
Population Breakdown (per technology adoption) | Key | n
All Survey Responses | All | 63
Respondents indicated using the ratings system. | Rat. | 29
Respondents indicated not using the ratings system. | NoRat. | 34
Respondents indicated using the recommender system. | Rec. | 22
Respondents indicated not using the recommender system. | NoRec. | 39
Respondents indicated using both the ratings and recommender system. | Both | 15
Respondents indicated using neither the ratings nor recommender system. | None | 26

CCO Trends: 2008 versus 2006/2007 Compared with past survey analysis, responses were lower on items related to social interaction. In 2006/2007 57% believed the CCO increased social interaction with peers while 51% agreed or strongly agreed with this statement in 2008. Percentages were down for social learning as well. Responses in 2008 identified 66% of individuals agreeing that the CCO increased learning, down from 81% in 2006/2007. Similarly, on items related to community, 60% of individuals responded that the CCO increased community, down from 82% in 2006/2007. Table 15 represents the averages and standard deviations on CCO-related items. Table 16 provides a complete breakdown of survey responses on CCO-related items.


Table 15. CCO Breakdown  Survey Item

Pop.

How often did you use Conversation.cgu.edu this past semester?

Conversation.cgu.edu increased interaction with my peers.

Conversation.cgu.edu increased learning in this class.


Avg.

StD.

All

1.82

0.46

Rat.

1.79

0.50

NoRat.

1.88

0.41

Rec.

1.77

0.53

NoRec.

1.87

0.41

Both

1.67

0.62

None

1.88

0.43

All

2.65

1.12

Rat.

2.25

1.08

NoRat.

3.03

1.06

Rec.

2.50

1.14

NoRec.

2.77

1.13

Both

2.33

1.11

None

3.08

1.06

All

2.35

1.13

Rat.

2.07

0.98

NoRat.

2.59

1.21

Rec.

2.27

1.08

NoRec.

2.44

1.17

Both

2.07

0.96

None

2.58

1.24

Survey Item

Pop.

Conversation.cgu.edu increased levels of community in this class.

I plan to continue using Conversation.cgu.edu outside this class.

Avg.

StD.

All

2.55

1.13

Rat.

2.04

0.92

NoRat.

2.97

1.11

Rec.

2.23

1.11

NoRec.

2.77

1.09

Both

1.87

0.83

None

3.00

1.10

All

3.29

1.25

Rat.

2.68

1.19

NoRat.

3.79

1.07

Rec.

3.00

1.27

NoRec.

3.51

1.17

Both

2.60

1.18

None

3.77

1.11

Table 16. CCO (posttest)  1=Strongly Agree, 2= Agree, 3=Neither Agree nor Disagree, 4=Disagree, 5= Strongly Disagree Survey Item

Pop.

1

2

3

4

5

All

21%

76%

3%

-

-

Rat.

28%

69%

3%

-

-

How often did you use Conversation.cgu.edu this past

NoRat.

15%

82%

3%

-

-

semester?

Rec.

27%

68%

5%

-

-

(1=daily, 2=weekly, 3=biweekly, 4=monthly,5=never)

NoRec.

15%

82%

3%

-

-

Both

40%

53%

7%

-

-

None

15%

81%

4%

-

-


1=Strongly Agree, 2= Agree, 3=Neither Agree nor Disagree, 4=Disagree, 5= Strongly Disagree All

14%

37%

22%

22%

5%

Rat.

24%

45%

17%

10%

3%

NoRat.

6%

29%

26%

32%

6%

Rec.

18%

41%

18%

18%

5%

NoRec.

13%

33%

23%

26%

5%

Both

27%

33%

20%

20%

-

None

8%

23%

27%

38%

4%

All

22%

44%

14%

14%

5%

Rat.

34%

31%

28%

7%

-

NoRat.

12%

56%

3%

21%

9%

Rec.

23%

45%

18%

9%

5%

NoRec.

21%

44%

13%

18%

5%

Both

33%

33%

27%

7%

-

None

15%

50%

4%

23%

8%

All

16%

44%

13%

24%

3%

Rat.

28%

52%

10%

10%

-

NoRat.

6%

38%

15%

35%

6%

Rec.

23%

55%

5%

14%

5%

NoRec.

10%

38%

18%

31%

3%

Both

33%

53%

7%

7%

-

None

8%

31%

19%

38%

4%

Conversation.cgu.edu increased interaction with my peers.

Conversation.cgu.edu increased learning in this class.

Conversation.cgu.edu increased levels of community in this class.


1=Strongly Agree, 2= Agree, 3=Neither Agree nor Disagree, 4=Disagree, 5= Strongly Disagree Survey Item

Pop.

1

2

3

4

5

All

10%

21%

16%

38%

16%

Rat.

14%

38%

17%

24%

7%

NoRat.

6%

6%

15%

50%

24%

Rec.

14%

27%

14%

36%

9%

NoRec.

5%

18%

18%

38%

21%

Both

20%

33%

13%

33%

-

None

8%

4%

15%

50%

23%

I plan to continue using Conversation.cgu.edu outside this class.

CCO Blogging The blogging feature of the CCO has been in place since its inception in 2006. Blogging still remains the most used feature of the CCO; 85% of individuals surveyed were required to use the blog weekly and only 5% reported not to have blogged during the semester. Half of respondents agreed or strongly agreed that blogging increased interaction with peers. Additionally, 65% of respondents agreed or strongly agreed that blogging enhanced levels of learning with 38% agreeing or strongly agreeing that blogging enhanced community. Table 17 represents the averages and standard deviations for blogging-related items. Table 18 represents the complete breakdown of survey responses for blogging -related items.  


Table 17. Blogging Breakdown  Survey Item

How often did you use Conversation.cgu.edu to blog?

Survey Item

Blogging increased interaction with my classmates.

Blogging increased learning for this class.


Pop.

Avg.

StD.

All

2.19

0.87

Rat.

1.89

0.57

NoRat.

2.47

0.99

Rec.

2.14

0.94

NoRec.

2.26

0.85

Both

1.87

0.64

None

2.42

0.95

Pop.

Avg.

StD.

All

2.68

1.21

Rat.

2.11

1.03

NoRat.

3.15

1.16

Rec.

2.36

1.22

NoRec.

2.82

1.12

Both

1.87

0.92

None

3.00

1.13

All

2.32

1.07

Rat.

1.89

0.79

NoRat.

2.68

1.15

Rec.

2.27

1.20

NoRec.

2.41

0.97

Both

1.87

0.83

None

2.62

1.02

Survey Item

Blogging was an excellent tool for building community in this class.

I plan to continue using Conversation.cgu.edu to blog outside this class.

My experience blogging was positive.

 

76

Pop.

Avg.

StD.

All

2.76

1.11

Rat.

2.11

0.83

NoRat.

3.29

1.03

Rec.

2.45

1.10

NoRec.

2.97

1.06

Both

2.00

0.85

None

3.27

1.08

All

3.53

1.30

Rat.

3.11

1.31

NoRat.

3.94

1.15

Rec.

3.09

1.34

NoRec.

3.85

1.16

Both

2.73

1.33

None

3.96

1.22

All

2.39

0.97

Rat.

2.04

0.90

NoRat.

2.69

0.93

Rec.

2.29

1.06

NoRec.

2.47

0.92

Both

2.07

0.92

None

2.68

0.85

Table 18. Blogging (posttest). Scale: 1=Strongly Agree, 2=Agree, 3=Neither Agree nor Disagree, 4=Disagree, 5=Strongly Disagree (except where noted). Cell values are the percentage of respondents selecting 1/2/3/4/5.

Survey Item | All | Rat. | NoRat. | Rec. | NoRec. | Both | None
How often did you use Conversation.cgu.edu to blog? (1=daily, 2=weekly, 3=biweekly, 4=monthly, 5=never) | 10/75/6/5/5 | 21/69/10/-/- | -/79/3/9/9 | 18/64/9/5/5 | 5/79/5/5/5 | 27/60/13/-/- | -/81/4/8/8
Blogging increased interaction with my classmates. | 17/33/22/19/8 | 31/38/24/3/3 | 6/29/21/32/12 | 27/36/14/18/5 | 10/33/28/21/8 | 40/40/13/7/- | 8/31/23/31/8
Blogging increased learning for this class. | 22/43/19/13/3 | 34/41/24/-/- | 12/44/15/24/6 | 32/32/18/14/5 | 13/51/21/13/3 | 40/33/27/-/- | 8/50/19/19/4
Blogging was an excellent tool for building community in this class. | 14/24/41/13/8 | 24/41/31/3/- | 6/9/50/21/15 | 23/27/36/9/5 | 8/23/44/15/10 | 33/33/33/-/- | 8/8/50/19/15
I plan to continue using Conversation.cgu.edu to blog outside this class. | 10/13/21/29/29 | 14/21/28/21/17 | 6/6/15/35/38 | 14/23/23/23/18 | 5/8/21/31/36 | 20/27/27/13/13 | 8/4/15/31/42
My experience blogging was positive. | 16/40/27/11/2 | 28/41/21/7/- | 6/38/32/15/3 | 27/40/13/13/7 | 15/33/36/13/- | 27/40/20/7/- | 8/31/42/15/-

CCO Ratings

This research looked specifically to measure the impact of two new systems, the first of which was a ratings system. Table 19 presents the averages and standard deviations for ratings system-related items, and Table 20 presents the complete breakdown of survey responses for ratings system-related items. These results are discussed further in Chapter Nine.

Table 19. Ratings Breakdown. Cell values are Avg. (StD.) for each population.

Survey Item | All | Rat. | NoRat. | Rec. | NoRec. | Both | None
Frequency rating blog posts? (1=daily, 2=weekly, 3=biweekly, 4=monthly, 5=never) | 3.92 (1.31) | 2.71 (2.61) | 5.00 (0.00) | 3.23 (1.41) | 4.33 (1.03) | 2.40 (0.83) | 5.00 (0.00)
The ability to rate blogs was useful. | 3.10 (1.10) | 2.61 (1.03) | 3.53 (0.96) | 2.50 (0.96) | 3.44 (0.94) | 2.20 (0.77) | 3.58 (0.90)
Ratings increased levels of interaction with my peers. | 3.34 (1.01) | 2.96 (0.96) | 3.71 (0.91) | 2.68 (0.95) | 3.72 (0.79) | 2.40 (0.74) | 3.77 (0.82)
Ratings increased levels of learning for this course. | 3.42 (1.03) | 3.00 (1.05) | 3.79 (0.88) | 2.91 (0.97) | 3.74 (0.85) | 2.60 (2.40) | 3.81 (0.85)
Ratings helped me to think more critically while writing blog posts. | 3.27 (1.15) | 2.82 (1.06) | 3.71 (1.06) | 2.68 (0.95) | 3.64 (1.04) | 2.40 (0.74) | 3.77 (1.03)
Ratings helped me to think more critically while reading blog posts. | 3.24 (1.20) | 2.61 (1.10) | 3.74 (1.05) | 2.45 (1.01) | 3.64 (1.06) | 2.13 (0.74) | 3.85 (0.97)
Ratings increased my motivation to post blogs. | 3.50 (3.37) | 2.86 (1.21) | 4.12 (0.88) | 2.82 (1.30) | 3.90 (1.02) | 2.27 (1.03) | 4.12 (0.86)
Ratings provided an excellent mechanism to exchange feedback with my peers. | 3.37 (1.12) | 2.89 (1.21) | 3.79 (1.01) | 2.64 (1.00) | 3.79 (0.92) | 2.40 (0.83) | 3.92 (0.89)
I was comfortable having my blog posts rated by my peers. | 2.66 (1.19) | 2.14 (1.01) | 3.15 (1.16) | 2.45 (1.06) | 2.82 (1.27) | 2.27 (0.96) | 3.23 (1.18)
I was comfortable rating the blog posts of my peers. | 2.94 (1.02) | 2.32 (0.77) | 3.47 (0.93) | 2.55 (1.06) | 3.18 (0.94) | 2.27 (0.88) | 3.54 (0.86)
I felt that my peers’ assessment of my blog posts was fair. | 2.82 (0.93) | 2.29 (0.71) | 3.32 (0.84) | 2.50 (1.01) | 3.03 (0.84) | 2.20 (0.86) | 3.35 (0.80)
When viewing my peers’ blog ratings, I felt that the community’s assessment was fair. | 2.89 (0.91) | 2.46 (0.74) | 3.26 (0.90) | 2.59 (0.96) | 3.10 (0.85) | 2.33 (0.82) | 3.31 (0.88)
Blog ratings were an excellent addition for this class. | 3.05 (1.15) | 2.71 (1.21) | 3.41 (0.90) | 2.36 (1.09) | 3.46 (1.00) | 2.07 (0.96) | 3.50 (0.91)
The ability to rate other content (such as wiki pages, files, comments, etc.) should exist. | 2.84 (1.13) | 2.29 (0.90) | 3.32 (1.09) | 2.36 (1.05) | 3.10 (1.10) | 2.13 (0.92) | 3.42 (1.06)
I plan to continue to use the ratings system to rate content across Conversation.cgu.edu. | 3.47 (1.25) | 2.86 (1.27) | 3.97 (1.00) | 2.77 (1.27) | 3.85 (1.09) | 2.40 (1.12) | 4.08 (0.93)
My experience rating blogs on Conversation.cgu.edu was positive. | 2.75 (0.91) | 2.44 (0.93) | 3.00 (0.83) | 2.48 (0.93) | 2.92 (0.87) | 2.21 (0.70) | 3.00 (0.75)

Table 20. Ratings System (posttest). Scale: 1=Strongly Agree, 2=Agree, 3=Neither Agree nor Disagree, 4=Disagree, 5=Strongly Disagree (except where noted). Cell values are the percentage of respondents selecting 1/2/3/4/5.

Survey Item | All | Rat. | NoRat. | Rec. | NoRec. | Both | None
Frequency rating blog posts? (1=daily, 2=weekly, 3=biweekly, 4=monthly, 5=never) | 2/21/16/8/54 | 4/45/35/17/- | -/-/-/-/100 | 5/41/14/9/32 | -/8/18/8/67 | 7/60/20/13/- | -/-/-/-/100
The ability to rate blogs was useful. | 6/21/44/14/14 | 14/35/35/14/4 | -/9/53/15/24 | 14/36/41/5/5 | -/13/49/21/18 | 20/40/40/-/- | -/4/58/12/23
Ratings increased levels of interaction with my peers. | 2/19/37/29/14 | 4/35/31/28/4 | -/6/41/29/24 | 5/45/32/14/5 | -/3/41/38/18 | -/53/40/7/- | -/-/46/31/23
Ratings increased levels of learning for this course. | 3/14/35/32/16 | 7/28/28/35/4 | -/3/41/29/26 | 5/32/36/23/5 | -/5/36/38/21 | -/40/47/13/- | -/-/46/27/27
Ratings helped me to think more critically while writing blog posts. | 3/25/30/22/19 | 7/38/31/17/7 | -/15/29/26/29 | 5/45/32/14/5 | -/15/31/28/26 | -/60/33/7/- | -/12/31/27/31
Ratings helped me to think more critically while reading blog posts. | 5/29/25/22/19 | 10/45/24/14/7 | -/15/26/29/29 | 14/45/27/9/5 | -/18/26/31/26 | 13/53/33/-/- | -/8/31/31/31
Ratings increased my motivation to post blogs. | 8/13/27/25/27 | 17/24/31/17/10 | -/3/24/32/41 | 18/27/18/27/9 | 3/3/33/26/36 | 20/33/33/13/- | -/-/31/27/42
Ratings provided an excellent mechanism to exchange feedback with my peers. | 3/21/30/27/19 | 7/31/35/21/7 | -/12/26/32/29 | 9/41/32/14/5 | -/8/31/36/26 | 13/33/47/7/- | -/4/31/35/31
I was comfortable having my blog posts rated by my peers. | 14/37/27/11/11 | 24/52/14/7/4 | 6/24/38/15/18 | 14/50/18/14/5 | 15/28/31/10/15 | 20/47/20/13/- | 8/15/42/15/19
I was comfortable rating the blog posts of my peers. | 5/30/40/16/10 | 10/52/31/7/- | -/12/47/24/18 | 14/41/27/14/5 | -/23/49/15/13 | 20/40/33/7/- | -/4/58/19/19
I felt that my peers’ assessment of my blog posts was fair. | 5/30/49/8/8 | 10/55/31/4/- | 10/55/31/4/- | 14/41/32/9/5 | -/23/62/5/10 | 20/47/27/7/- | -/4/73/8/15
When viewing my peers’ blog ratings, I felt that the community’s assessment was fair. | 3/29/51/10/8 | 7/45/41/7/- | 7/45/41/7/- | 9/41/36/9/5 | -/21/59/10/10 | 13/47/33/7/- | -/12/62/12/15
Blog ratings were an excellent addition for this class. | 10/19/41/16/14 | 21/28/24/21/7 | -/12/56/12/21 | 23/36/27/9/5 | 3/8/51/18/21 | 33/40/20/7/- | -/4/65/8/23
The ability to rate other content (such as wiki pages, files, comments, etc.) should exist. | 8/35/35/10/13 | 17/48/24/10/- | -/24/44/9/24 | 18/45/23/9/5 | 3/28/44/8/18 | 27/40/27/7/- | -/15/54/4/27
I plan to continue to use the ratings system to rate content across Conversation.cgu.edu. | 6/17/30/17/29 | 14/31/28/14/14 | -/6/32/21/41 | 14/36/23/14/14 | 3/5/36/18/38 | 13/40/33/7/7 | -/-/38/15/46
My experience rating blogs on Conversation.cgu.edu was positive. | 8/26/52/8/5 | 14/35/41/4/4 | 3/18/59/12/6 | 13/40/33/7/7 | 8/13/64/10/5 | 13/47/33/-/- | 4/12/69/12/4

CCO Recommendations

This research looked specifically to measure the impact of two new systems, the second of which was a recommender system. Table 21 presents the averages and standard deviations for recommender system-related items, and Table 22 presents the complete breakdown of survey responses for recommender system-related items. These results are discussed further in Chapter Nine.

Table 21. Recommendation System Breakdown. Cell values are Avg. (StD.) for each population.

Survey Item | All | Rat. | NoRat. | Rec. | NoRec. | Both | None
Frequency checking for peer recommendations? (1=daily, 2=weekly, 3=biweekly, 4=monthly, 5=never) | 4.05 (1.37) | 3.67 (1.41) | 4.45 (1.12) | 2.36 (0.79) | 5.00 (0.00) | 2.33 (0.82) | 5.00 (0.00)
I found my peer recommendations useful. | 3.16 (1.01) | 2.85 (0.91) | 3.48 (1.00) | 2.59 (0.91) | 3.56 (0.88) | 2.33 (0.62) | 3.58 (0.93)
I looked forward to checking for new recommendations. | 3.07 (1.04) | 2.78 (0.93) | 3.42 (1.09) | 2.45 (1.01) | 3.53 (0.88) | 2.33 (0.90) | 3.63 (0.97)
Recommendations increased interaction with my peers. | 3.17 (1.14) | 2.74 (0.94) | 3.65 (1.08) | 2.41 (1.05) | 3.69 (0.89) | 2.13 (0.83) | 3.83 (0.96)
Recommendations were an excellent tool for building community in this class. | 3.17 (1.01) | 2.93 (0.92) | 3.48 (1.06) | 2.73 (1.03) | 3.53 (0.91) | 2.53 (0.92) | 3.58 (1.02)
Recommendations were an excellent addition for this class. | 3.07 (1.01) | 2.78 (0.97) | 3.42 (0.99) | 2.59 (1.05) | 3.44 (0.88) | 2.40 (0.99) | 3.54 (0.93)
Finding recommendations based on similar blog ratings is an excellent way to recommend peer connections. | 3.10 (1.04) | 2.70 (0.82) | 3.52 (1.00) | 2.45 (0.91) | 3.53 (0.88) | 2.20 (0.68) | 3.67 (0.92)
Recommendations for other content (such as blog posts, wiki pages, files, comments, etc.) should also exist. | 2.88 (1.06) | 2.41 (0.80) | 3.32 (1.08) | 2.23 (0.97) | 3.31 (0.89) | 2.07 (0.70) | 3.54 (0.88)
I plan to continue discovering new connections at CGU through the recommender system. | 3.14 (1.06) | 2.62 (0.75) | 3.55 (1.12) | 2.41 (1.05) | 3.50 (0.97) | 2.36 (0.74) | 3.75 (1.03)
My experience finding peer recommendations on Conversation.cgu.edu was positive. | 2.80 (0.92) | 2.63 (0.93) | 2.97 (0.88) | 2.32 (1.17) | 3.03 (0.74) | 2.14 (0.86) | 2.96 (0.77)

Table 22. Recommender System (posttest). Scale: 1=Strongly Agree, 2=Agree, 3=Neither Agree nor Disagree, 4=Disagree, 5=Strongly Disagree (except where noted). Cell values are the percentage of respondents selecting 1/2/3/4/5.

Survey Item | All | Rat. | NoRat. | Rec. | NoRec. | Both | None
Frequency checking for peer recommendations? (1=daily, 2=weekly, 3=biweekly, 4=monthly, 5=never) | 2/24/5/5/62 | 3/34/7/7/45 | -/15/3/3/76 | 5/68/14/14/- | -/-/-/-/100 | 7/67/13/13/- | -/-/-/-/100
I found my peer recommendations useful. | -/24/46/8/16 | -/41/38/10/7 | -/9/53/6/24 | -/64/18/14/5 | -/-/64/5/23 | -/73/20/7/- | -/-/65/-/27
I looked forward to checking for new recommendations. | 2/27/41/8/16 | 3/38/41/7/7 | -/18/41/9/24 | 5/68/14/5/9 | -/3/59/10/21 | 7/67/20/-/7 | -/4/54/8/27
Recommendations increased interaction with my peers. | 5/21/38/13/17 | 10/31/38/14/3 | -/12/38/12/29 | 14/55/14/14/5 | -/-/54/13/26 | 20/53/20/7/- | -/-/50/8/35
Recommendations were an excellent tool for building community in this class. | 2/21/44/11/16 | 3/28/48/10/7 | -/15/41/12/24 | 5/45/32/9/9 | -/5/54/13/21 | 7/47/40/-/7 | -/8/50/8/35
Recommendations were an excellent addition for this class. | 3/21/48/8/14 | 7/31/45/7/7 | -/12/50/9/21 | 9/45/32/5/9 | -/5/59/10/18 | 13/47/33/-/7 | -/4/58/8/23
Finding recommendations based on similar blog ratings is an excellent way to recommend peer connections. | 3/21/48/8/14 | 7/34/45/7/3 | -/9/50/9/24 | 9/50/32/5/5 | -/3/59/10/21 | 13/53/33/-/- | -/-/58/8/27
Recommendations for other content (such as blog posts, wiki pages, files, comments, etc.) should also exist. | 8/22/48/5/11 | 14/34/45/3/- | 3/12/50/6/21 | 18/55/18/5/5 | 3/3/67/5/15 | 20/53/27/-/- | -/-/65/4/23
I plan to continue discovering new connections at CGU through the recommender system. | 2/27/37/13/14 | 3/38/41/10/- | -/18/32/15/26 | 5/55/23/9/5 | -/10/46/15/21 | 7/53/27/7/- | -/8/38/15/31
My experience using the recommender system was positive. | 6/27/48/11/5 | 10/31/45/7/3 | 3/24/50/15/6 | 14/50/14/14/5 | 3/13/69/10/5 | 20/47/20/7/- | 4/15/65/12/4

CCO Usability

General CCO usability and satisfaction were also tracked. Overall, 57% of individuals reported a positive experience using the CCO, with 16% disagreeing, and 78% reported that the CCO was easy to use, with 11% disagreeing. These results are detailed in Table 23 and Table 24.

Table 23. CCO Satisfaction Breakdown. Cell values are Avg. (StD.) for each population.

Survey Item | All | Rat. | NoRat. | Rec. | NoRec. | Both | None
Using Conversation.cgu.edu was easy for me. | 2.12 (0.92) | 2.19 (0.88) | 2.09 (1.01) | 2.33 (1.02) | 2.05 (0.92) | 2.43 (0.85) | 2.08 (0.93)
I found it easy to get Conversation.cgu.edu to do what I want it to do. | 2.45 (1.10) | 2.48 (1.05) | 2.48 (1.18) | 2.52 (1.03) | 2.46 (1.17) | 2.43 (1.02) | 2.42 (1.21)
Interacting with Conversation.cgu.edu was clear and understandable. | 2.40 (1.04) | 2.41 (1.08) | 2.45 (1.06) | 2.38 (1.07) | 2.46 (1.07) | 2.29 (1.07) | 2.42 (1.06)
I found Conversation.cgu.edu to be flexible to interact with. | 2.58 (1.09) | 2.44 (1.09) | 2.73 (1.13) | 2.48 (1.12) | 2.69 (1.10) | 2.36 (1.08) | 2.73 (1.12)
It was easy for me to become skillful at using Conversation.cgu.edu. | 2.25 (0.89) | 2.15 (0.66) | 2.36 (1.06) | 2.14 (0.85) | 2.33 (0.93) | 2.07 (0.62) | 2.38 (1.02)
My experience using Conversation.cgu.edu was positive. | 2.48 (1.08) | 2.19 (0.96) | 2.76 (1.12) | 2.29 (0.96) | 2.62 (1.14) | 2.00 (0.68) | 2.73 (1.12)
Overall, I was satisfied with Conversation.cgu.edu. | 2.59 (1.17) | 2.22 (1.09) | 2.88 (1.16) | 2.20 (0.95) | 2.79 (1.23) | 1.92 (0.64) | 2.92 (1.15)

Table 24. CCO Satisfaction (posttest). Scale: 1=Strongly Agree, 2=Agree, 3=Neither Agree nor Disagree, 4=Disagree, 5=Strongly Disagree. Cell values are the percentage of respondents selecting 1/2/3/4/5.

Survey Item | All | Rat. | NoRat. | Rec. | NoRec. | Both | None
Using Conversation.cgu.edu was easy for me. | 19/59/8/8/3 | 17/52/17/10/- | 21/65/-/6/6 | 13/53/20/7/7 | 23/62/5/8/3 | 7/53/20/13/- | 19/69/-/8/4
I found it easy to get Conversation.cgu.edu to do what I want it to do. | 14/48/16/13/6 | 14/45/21/14/3 | 15/50/12/12/9 | 13/53/20/7/7 | 18/46/15/13/8 | 13/47/13/20/- | 19/50/8/15/8
Interacting with Conversation.cgu.edu was clear and understandable. | 14/48/21/8/6 | 21/34/28/10/3 | 9/59/15/6/9 | 20/40/33/-/7 | 13/51/21/8/8 | 27/27/27/13/- | 12/58/15/8/8
I found Conversation.cgu.edu to be flexible to interact with. | 13/40/24/14/6 | 17/38/24/14/3 | 9/41/24/15/9 | 20/47/13/13/7 | 13/33/33/13/8 | 20/40/13/20/- | 12/35/31/15/8
It was easy for me to become skillful at using Conversation.cgu.edu. | 14/54/21/5/3 | 14/55/28/-/- | 15/53/15/9/6 | 20/60/13/-/7 | 15/49/26/8/3 | 13/60/20/-/- | 15/50/19/12/4
My experience using Conversation.cgu.edu was positive. | 14/43/24/10/6 | 21/48/21/3/3 | 9/38/26/15/9 | 20/47/20/7/7 | 15/36/28/13/8 | 20/53/20/-/- | 12/35/31/15/8
Overall, I was satisfied with Conversation.cgu.edu. | 14/41/14/17/6 | 24/41/14/10/3 | 6/41/15/24/9 | 20/47/13/7/7 | 15/31/18/26/8 | 20/53/13/-/- | 8/35/19/27/8

Qualitative Posttest Results

Qualitative feedback from individuals was also tracked. Similar to the questionnaire data, open-ended responses were mixed. The first set of open-ended questions allowed individuals to reflect on any aspects of the ratings or recommender system. Table 25 and Table 26 highlight these responses.


Table 25 – Open-ended Ratings Feedback

Positive:
• Good interface
• Works great
• Easy to use
• High usability
• Looked good but did not see my content rated
• Rarely used but looked and functioned well
• Easy to understand and should definitely be kept in the user interface

Negative:
• [Ratings are arbitrary]
• User unfriendly and counterproductive
• Silly and not needed
• Do not see as useful
• Since it was an optional function, not many people used it
• Did not use
• Lacks a community feeling
• Never used or paid attention to it
• Ratings system was pointless. I cannot conceive why it would inspire community or communication
• Having real conversation is more valuable

Table 26 – Open-ended Recommender Feedback

Positive:
• High usability
• Engenders positive feedback and connectedness among course mates

Negative:
• Relocate to side bar
• Never cared to use
• Did not use

A second set of open-ended questions allowed individuals to provide recommendations on how either component could be improved. Responses were as follows:

• Possibly let people know if the ratings store who made what rating. People are probably not likely to rate if someone can see it was them who rated. Some reassurance would be all it would need.
• Make ratings a requirement; don't let them leave the page without rating.
• I can't see [ratings] being used against blogs.
• I suggest by telling lecturers and students about the function.
• Get people to use them.

Interview Data

Additional qualitative feedback was requested from individuals indicating a willingness to participate in a focus group or interview session. From an initial pool of 24 individuals indicating an interest, I conducted two one-hour interviews. The interviews were semi-structured and used data from the posttests to probe more deeply into reactions to the system. While two interviews are limiting, they did provide insight into four CGU courses adopting the CCO during the fall 2008 and spring 2009 semesters (both individuals participated in two different courses implementing the CCO).


CCO Ratings System

An initial question asked whether individuals (referred to below as User A and User B) were shown blog posts versus recommendations after logging in. User A and User B both stated having access to top-rated blogs on their homepage. However, both stated that they did not click on blogs from the homepage. User B stated specifically, “I did not read 5-star blogs because I didn’t know those people. I need to know more than that they go to CGU.” User A stated a similar sentiment, “[I] was not interested in random blog posts and [I] looked for recognition in user names.” An interesting insight from User B was a realization that he may have had the concept of the CCO backwards and that random blog posts were one way of finding out more about those individuals. User B also felt that the blog posts seemed to be the same blog posts over and over again. This was unfortunate, since the list of blogs was generated dynamically based on the frequency of blog ratings. However, since most blogs had no ratings or fewer than three ratings, which was the minimum threshold, many blogs never surfaced on the homepage. User A and User B both indicated a lack of instructor support for ratings. While each viewed the ratings as a valid method to increase interaction, they were largely ineffective since most people were not rating. User A cited that a lack of participation severely limited the impact of the ratings, indicating that with so few ratings little, if any, assessment of a post could be determined. User B stated that there was no consistency across either course using the CCO and that there was only a half-hearted approach to the CCO in general, let alone blog ratings. User B stated, “The instructor viewed the CCO as if it was any other instructional technology (i.e., not worthwhile).”
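The interview comments above point to the homepage's "top rated" blog list and its minimum threshold of three ratings. As a rough illustration only (the CCO's actual implementation is not reproduced here, and the function and field names below are hypothetical), the selection logic can be sketched as follows; with sparse ratings, few posts clear the threshold, which is consistent with users reporting that the same posts appeared repeatedly:

```python
# Illustrative sketch only: surface "top rated" blog posts on a homepage,
# requiring a minimum number of ratings before a post can appear.
# All names (top_rated_posts, "ratings", etc.) are hypothetical placeholders.

MIN_RATINGS = 3   # minimum-ratings threshold mentioned in the interviews
MAX_RESULTS = 5   # how many posts the homepage widget would show


def top_rated_posts(posts, min_ratings=MIN_RATINGS, limit=MAX_RESULTS):
    """Return the highest-rated posts that have enough ratings to qualify."""
    qualified = [p for p in posts if len(p["ratings"]) >= min_ratings]
    # Sort by average rating, breaking ties by the number of ratings received.
    qualified.sort(
        key=lambda p: (sum(p["ratings"]) / len(p["ratings"]), len(p["ratings"])),
        reverse=True,
    )
    return qualified[:limit]


# Example: only one post clears the three-rating threshold.
posts = [
    {"title": "Post A", "ratings": [5, 4, 5]},
    {"title": "Post B", "ratings": [4]},   # too few ratings to surface
    {"title": "Post C", "ratings": []},    # unrated, never surfaces
]
print([p["title"] for p in top_rated_posts(posts)])  # -> ['Post A']
```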

CCO Recommender System Both users indicated having the ratings component on homepage and were thus unaware that a new recommender system existed. User B also mentioned that CCO training was rushed and unclear, which could have resulted in a truncated explanation of all CCO features, including the two new systems. However, I did inquire further regarding their general interests in a recommender system for the CCO. User A stated that a recommender system seems like an interesting component that should be developed further. In line with the objective to foster connection-making and build community outside of the classroom, User A stated, “Anything you can do to add something outside the class is worthwhile.” When asked about whether or not User B would be interested in system notifications surrounding connections, User B stated, “If I could specify the threshold [to help eliminate excessive noise], I would be interested in automatic feeds or email notifications.”
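The survey items above describe peer recommendations derived from similar blog ratings, and User B's comment suggests letting users set a notification threshold. As a hedged sketch only (the similarity measure, data layout, and names below are assumptions for illustration, not the CCO's actual code), such ratings-based peer matching with a user-chosen threshold might look like this:

```python
# Illustrative sketch only: recommend "like-minded" peers from overlapping
# blog ratings, filtering notifications by a user-chosen similarity threshold.
from math import sqrt

# user -> {blog_post_id: rating}; hypothetical sample data
ratings = {
    "alice": {"p1": 5, "p2": 4, "p3": 2},
    "bob":   {"p1": 5, "p2": 5, "p4": 3},
    "carol": {"p3": 1, "p4": 5},
}


def cosine_similarity(a, b):
    """Cosine similarity over the blog posts two users have both rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[p] * b[p] for p in shared)
    norm_a = sqrt(sum(a[p] ** 2 for p in shared))
    norm_b = sqrt(sum(b[p] ** 2 for p in shared))
    return dot / (norm_a * norm_b)


def recommend_peers(user, all_ratings, threshold=0.9):
    """Return peers whose rating behaviour is similar enough to notify about."""
    scores = [
        (peer, cosine_similarity(all_ratings[user], peer_ratings))
        for peer, peer_ratings in all_ratings.items()
        if peer != user
    ]
    # Keep only peers above the user-specified threshold, most similar first.
    return sorted(
        [(peer, s) for peer, s in scores if s >= threshold],
        key=lambda pair: pair[1],
        reverse=True,
    )


print(recommend_peers("alice", ratings, threshold=0.9))
```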


CHAPTER EIGHT – HYPOTHESIS TESTING

Randomly assigning the reputation system and the recommender system to the homepage resulted in a bifurcation in system usage. As a result, some individuals did not experience the recommender system. Therefore, hypothesis testing for H1 focused on the population using the ratings system, and hypothesis testing for H2 and H3 focused on the populations using the recommender system. Hypotheses H4 through H8 were tested using all 63 survey responses, and hypothesis H9 was tested through a content analysis. Hypotheses H1 through H8 were analyzed using Microsoft Excel 2007, which provides an optional Data Analysis package supporting regression and correlation analysis. The regression analysis conducted was linear regression using the least-squares method. The correlation analysis used Pearson's product-moment correlation coefficient (PMCC).

Retention

H1: High levels of reputation system satisfaction will positively impact planned continued use.

 

H1 asserted that positive ratings system satisfaction would impact planned continued use. Since this model was dependent on system use, I focused on those individuals who had rated items across the CCO. Overall, the model was significant at the p
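As a hedged illustration of the procedures described at the start of this chapter (the variable names and sample values below are invented placeholders, not data from this study), the PMCC and least-squares calculations that Excel's Data Analysis package performs can be sketched as follows:

```python
# Illustrative sketch only: Pearson's product-moment correlation and a simple
# least-squares linear regression, as used for the H1-H8 analyses.
from math import sqrt


def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two variables."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    var_x = sum((xi - mean_x) ** 2 for xi in x)
    var_y = sum((yi - mean_y) ** 2 for yi in y)
    return cov / sqrt(var_x * var_y)


def least_squares(x, y):
    """Slope and intercept of the least-squares line y = a + b*x."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    b = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / sum(
        (xi - mean_x) ** 2 for xi in x
    )
    a = mean_y - b * mean_x
    return a, b


# Hypothetical 1-5 survey responses: satisfaction (x) vs. planned continued use (y).
satisfaction = [1, 2, 2, 3, 4, 5, 2, 1]
continued_use = [2, 2, 3, 3, 4, 5, 2, 1]
print(pearson_r(satisfaction, continued_use))
print(least_squares(satisfaction, continued_use))
```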
