Panhellenic Conference on Informatics

Learning Software Project Management on the Web: the Impact of Question Prompts

Pantelis M. Papadopoulos, Stavros N. Demetriadis, Ioannis G. Stamelos, Ioannis A. Tsoukalas
Aristotle University of Thessaloniki, Department of Informatics
{pmpapad, sdemetri, stamelos, tsoukala}@csd.auth.gr

Abstract

This work focuses on the efficiency of question prompts for supporting students learning through cases in an ill-structured domain, such as Software Project Management. Three groups of students studied cases in a lab-session time period using a web-based environment, where question prompts directed students to think about important issues in the case material. The first group studied the cases without question prompts, the second group was prompted to provide written answers to questions embedded in the cases, and the third group was asked only to think of possible answers to the question prompts. Post-tests did not reveal any significant differences between the three groups. This result is discussed in the light of a previous study, which showed that this kind of prompting may benefit student learning in a prolonged study-time setting, where students are able to self-regulate their study activity.

1. Introduction

Ill-structured problems are highly contextualized problems in which vagueness and ambiguity dominate some or all aspects of the problem. In our research, we investigate the effectiveness of technology tools for supporting instruction in ill-structured domains. For this purpose we have developed a web-based environment for learning through cases (eCASE-SPM) and have conducted field tests using the environment for instruction in the Software Project Management (SPM) domain. To engage students in deeper processing of the complex case material, we integrated question prompts modeling those cognitive processes that are relevant to the processing of contextual information. In a previous study, we investigated the learning effectiveness of this questioning strategy in the ill-structured domain of Software Project Management. Results showed that a questioning scheme that focuses students' attention on important aspects of the learning material can lead to better learning outcomes [3]. The current study investigates whether the impact of the same scheme is affected by the mode of study. By "mode of study" we refer to two different aspects of the learning activity: (a) the "prompting mode", i.e. the way the questions engage students in processing the learning material (thinking about vs. writing the answers to the question prompts), and (b) the "time restrictions", that is, the available study time (during which the questioning strategy is expected to have an impact). To extend and complement the findings of the previous study, the present one is conducted in the same context, namely technology-enhanced learning in an ill-structured domain. The same web-based environment for case-based instruction, with the same questioning strategy used in [3], is also used here. In the following, we present (a) the theoretical background of our approach, (b) the research design and results, and (c) a discussion analyzing the impact of the question prompts in different modes of study.

2. Theoretical background

978-0-7695-3323-0/08 $25.00 © 2008 IEEE DOI 10.1109/PCI.2008.21

2.1. The effect of question prompts in learning

Question prompts are sets of questions used to guide and facilitate learning, offering both cognitive and metacognitive support to students [5]. Research indicates that questioning strategies can be highly beneficial for students, helping them with important cognitive functions, such as focusing attention, stimulating prior knowledge, enhancing comprehension, and facilitating problem-solving processes [5]. Researchers expect prompting to be successful in supporting ill-structured problem-solving tasks as well. Available studies confirm these expectations, showing that supporting students with question prompts may significantly improve their skills in ill-structured problem solving [e.g. 2, 10]. Question prompts have been used in technology-enhanced learning environments to help direct students towards learning-appropriate goals, such as focusing student attention, modelling the kinds of questions students should be learning to ask, and helping make their thinking visible and thus an object for reflection [e.g. 7, 14].

2.2. Question prompts in case-based learning

Case-based learning is a method widely implemented for introducing students to the intricacies of ill-structured domains. When practicing case-based learning (CBL), two instructionally challenging issues always need careful intervention: (a) first, how to help students avoid misconceptions by not oversimplifying the material; (b) second, the "knowledge transfer problem", i.e. how to support students in applying their knowledge to new problem situations, which may differ significantly from those encountered in the instructional setting. We argue that, in technology-supported environments for case-based learning, students can be effectively supported (in relation to the above objectives) by appropriate question prompts that focus their attention on important contextual issues in the cases they study. In order to construct a domain-independent scheme, we stipulate that the question prompts should trigger those cognitive processes that are relevant to generating the context of a situation (hence "context-generating cognitive processes"). According to Kokinov [9], there are at least three such processes: perception, memory recall, and reasoning. It might be beneficial, therefore, for learners who study complex case-based material to be prompted to (a) focus on important events evident in the situation (thus activating the perception process), (b) relate these events and their impact to what is already known from other similar situations (activating the memory recall process), and (c) reach useful conclusions (activating the reasoning process), based also on the results of the two previous steps. This "observe-recall-conclude" scheme is expected to improve students' understanding and recall of the cases they study.

2.3. Thinking vs. writing the answers

Writing can be beneficial for learning, although these benefits do not emerge automatically from an isolated writing task [1]. Writing has been used effectively as a tool for constructive learning, supporting students in developing critical thinking and increasing their analysis, inference, and evaluation skills [13]. Based on the above, it can be assumed that a questioning strategy that engages students in writing tasks could gain from the benefits of the writing process. The observe-recall-conclude questioning scheme has so far been studied with the requirement that students provide written answers to the question prompts [3]. The requirement for written answers was originally imposed to avoid what is sometimes reported as a problem with question prompts, namely the failure of students to engage in deeper processing due to superficial engagement with the prompts [6]. However, a questioning strategy that requires written answers to every open-ended question prompt produces a large volume of student answers, which takes significant time for a tutor to assess (since machine-based analysis of the answer text is not always feasible). It is interesting, therefore, to explore the effectiveness of the same questioning strategy without the writing process, that is, when students are prompted simply to reflect on the learning material.

3. Overview of the study

3.1. Goal of the study

The main goal of the study was to investigate whether the mode of study (prompting mode and time restrictions) has an impact on the effect of a questioning strategy for activating students' context-generating cognitive processes, implemented in a technology-enhanced learning environment supporting case-based instruction.

3.2. Domain of instruction

The domain of instruction was Software Project Management, a domain of considerable complexity in which knowledge transfer to job-related situations is essential. SPM was chosen because it is hard to teach and learning relies largely on past experiences and on project successes and failures. Difficulties in this domain stem from the fact that software processes are not well-defined, their product is intangible and often hard to measure, and large software projects differ in various ways from other projects [11]. In addition, many aspects of SPM are not adequately formalized and involve subjective quantification, e.g. risk prioritization. As a consequence, software managers recall and use their knowledge about projects they have managed (or are aware of) in the past, and base their decisions on management patterns and anti-patterns. It is worth mentioning that this field has been ranked first among 40 computer science topics whose instruction needs to be intensified in academia because of demands in professional contexts [8].
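As an illustration, the observe-recall-conclude scheme described in Section 2.2 can be sketched as a small data structure that attaches one question per context-generating cognitive process to a case-frame. The questions are the ones used in the study; the class and function names, however, are hypothetical and are not taken from the actual eCASE-SPM implementation.

```python
# Illustrative sketch of the observe-recall-conclude questioning scheme.
# Class and function names are hypothetical, not the eCASE-SPM code.
from dataclasses import dataclass

# One question per context-generating cognitive process (Kokinov [9]):
# perception ("observe"), memory recall ("recall"), reasoning ("conclude").
SCHEME = [
    ("observe", "What concrete events (decisions, etc.) imply possible "
                "problems during project development?"),
    ("recall", "In what other cases do you recall having encountered "
               "similar project development problems?"),
    ("conclude", "What are some useful implications for the successful "
                 "development of a project?"),
]

@dataclass
class CaseFrame:
    """A self-contained domain theme within an advice-case."""
    title: str
    text: str

def prompts_after(frame: CaseFrame) -> list:
    """Return the three scheme questions shown after studying a case-frame."""
    return [question for _process, question in SCHEME]
```

The same three questions are reused for every case-frame; only the surrounding case material changes, which is what makes the scheme domain-independent.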

3.3. The eCASE-SPM learning environment

For the purpose of this study, we used eCASE-SPM, a web-based environment for case-based learning in the Software Project Management domain. Studying in eCASE-SPM involves working with ill-structured problems, presented to students as scenarios. A scenario is a problem-case anchoring student learning in realistic and complex problem situations in the field. After presenting the problem, a scenario poses some critical open-ended questions (scenario questions), engaging students in decision-making processes, as if they were field professionals. Before answering the scenario questions, learners are guided to study supporting material in the form of advice-cases. An advice-case is a comprehensive case presenting some useful experience in the field that is relevant to the scenario problem. Hence, each scenario in eCASE-SPM is accompanied by a number of advice-cases. Advice-cases are organized in smaller parts (case-frames), each presenting a domain theme, that is, some meaningful and self-contained aspect of the whole case. For example, an advice-case in eCASE-SPM could be organized in three case-frames, under the titles "The role of end-users", "Changing requirements" and "Executive support and commitment", which are considered important themes in the SPM domain. The advice-cases used were selected and adapted from authentic SPM cases reported in the literature [e.g. 4]. Question prompts, based on the aforementioned observe-recall-conclude scheme, appear after each case-frame of an advice-case.

4. Method

4.1. Participants

The study employed 40 Computer Science students (21 males and 19 females in their 3rd, out of 4, year of studies), who volunteered to participate. The students were randomly assigned to the three prompting conditions (non-prompted: 12 students; written mode: 15 students; thinking mode: 13 students). The unequal group sizes were due to practical issues (drop-outs). Students were domain novices and had not previously engaged in formal case-based learning as undergraduates.

4.2. Design

In the study we employed three groups, with the prompting mode as the independent variable. Students in the first group had no question prompts (NP group), the second group had to provide written answers to every question prompt (written mode, WM group), and the third group was only advised to reflect on the material and think of possible answers to the question prompts (thinking mode, TM group).

4.3. Procedure

Students worked individually and proceeded through the study in four distinct phases: pre-test, familiarization phase, study phase, and post-test. Each phase was conducted in a computer laboratory and the three groups were always in the same phase.

In the pre-test phase, students were asked to complete a prior domain knowledge instrument which included a set of 6 open-ended question items relevant to domain conceptual knowledge.

During the familiarization phase, students were instructed to log in to the eCASE-SPM environment (a lab computer was assigned to each student) and work on a relatively simple scenario prepared for them, including one short advice-case. In general, students had to read the material in the advice-cases and, based on that, provide answers to the scenario's open-ended questions. No question prompts were presented while studying the case-frames in the advice-cases; hence, the familiarization phase was the same for the three groups. Students were allowed two hours to complete the activity, and the objective of this phase was for them to become familiar with the eCASE-SPM user interface and navigation tools.

After the familiarization phase, the groups continued with the study phase, which was different for each group. The study phase lasted two hours and the students had to work online on a complex scenario, addressing more domain themes and accompanied by two longer advice-cases with more case-frames. The material was the same for all groups. For the non-prompted (NP) group, the study phase had the same conditions as the familiarization phase: after navigating through all the advice-cases of the scenario, the system allowed students to upload their scenario answers. There were no questions in the advice-cases for the NP group.

The students in the written mode (WM) group were prompted: each time they navigated to a new case-frame in an advice-case, the observe-recall-conclude questions appeared. The three questions were stated as follows: (a) "What concrete events (decisions, etc.) imply possible problems during project development?", (b) "In what other cases do you recall having encountered similar project development problems?", (c) "What are some useful implications for the successful development of a project?". Answers to the questions had to be submitted for the system to consider the study of each case-frame finalized and eventually allow students to upload their answers to the scenario questions.

The students in the thinking mode (TM) group had the exact same observe-recall-conclude questions following each case-frame in the advice-cases. However, these students did not have to provide written answers; instead, they were consistently advised to spend some time on each case-frame thinking of possible answers to the three questions. Thus, the system allowed students to upload their scenario answers after navigating through all the advice-cases of the scenario.

The written post-test session followed and concluded the activity. The post-test comprised two sections focusing on: (a) students' potential for knowledge transfer to a new problem situation, and (b) acquired domain-specific conceptual knowledge. The first section presented a dialogue-formatted scenario in which various stakeholders (company CEO, CFO, clients, technicians, etc.) discussed managerial issues of an ongoing software project in an everyday professional context. Students were asked to identify elements in the scenario that might indicate inefficient management and to suggest resourceful alternatives. The second section included two domain conceptual knowledge questions. The answers to these questions were not to be found as such in the study material, but rather had to be constructed by combining pieces of information presented on various occasions in the case material.

Students' answers to the scenario questions and the post-test were mixed and blindly assessed by two independent raters, to avoid any biases. A 0-10 scale was used and raters followed predefined instructions on how to assess each specific item. Eventually each student received two scores: (a) a score for the new scenario analysis ("transfer" score) and (b) a score for answering the domain-specific conceptual knowledge questions ("conceptual" score). Each score was calculated as the mean of the corresponding scores provided by the two raters.

5. Results

Pre-test results analysis (ANOVA) indicated that students were domain novices, scoring very low (NP: M=1.73, SD=0.75; WM: M=1.92, SD=1.01; TM: M=1.89, SD=1.31; F(2,37)=0.11, p>0.05). The three conditions, therefore, were comparable regarding students' prior knowledge. Table 1 presents the results of the students' performance in the post-test.

Table 1. Students' performance (0-10), M (SD)

             Non-Prompted (n=12)   Written Mode (n=15)   Thinking Mode (n=13)
Transfer     8.50 (1.43)           8.23 (1.43)           7.65 (1.23)
Conceptual   6.31 (1.36)           6.20 (2.12)           5.71 (1.71)

Regarding the two dependent measures of the post-test, ANCOVA results showed that the main effect of the prompting mode did not reach statistical significance (transfer: F(2,36)=1.19, p>0.05; conceptual: F(2,36)=0.43, p>0.05). Students' pre-test score was used as a covariate, but it was not significantly related to either the transfer or the conceptual scores.

6. Discussion

As mentioned earlier, the learning effectiveness of the observe-recall-conclude questioning strategy was demonstrated in a previous study [3]. In that study, two groups of students (16 non-prompted vs. 16 prompted, mirroring the NP and WM conditions respectively) studied three scenarios with five advice-cases (in total) in eCASE-SPM for a week. The learning material and the post-test, although not exactly the same, were equivalent to those used in the current study. The analysis of that previous study showed that the prompted group scored significantly higher on both the transfer (NP: M=7.81, SD=1.32; WM: M=8.81, SD=0.98; t(30)=2.42, p=0.02) and the conceptual (NP: M=7.77, SD=1.23; WM: M=8.52, SD=0.72; t(30)=2.08, p=0.04) measures.

In the current study, we included a third prompting option (the TM group) to examine whether a middle-ground approach that uses the question prompts, but does not impose the cognitive overhead of providing written answers, can be beneficial for the students. By contrast, the analysis in the current study did not reveal any significant main effect of the mode of study on the post-test, with the three groups being comparable, scoring rather high on the transfer and low on the conceptual measures.

Further analysis can provide hints about the scores of the three groups. In [3], results showed that in a prolonged, self-paced setting the context-generating approach proved helpful for the prompted students. In the present study, the post-test was conducted immediately after the study phase, and neither benefits for the WM group nor differences between the NP and TM groups were detected.

The pattern of results in these two studies resembles that of the "spacing effect". In general, the spacing effect refers to the improvement of memory performance when study repetitions are "spaced" (separated by time or other events) rather than "massed" (in immediate succession) (e.g. [12]). In [3], students followed a spaced mode of study (they logged in to the system several times throughout the week and the post-test followed on the last day), and this resulted in a positive prompting effect for the treatment group. By contrast, the current massed study (continuous study in a restricted time frame with an immediate post-test) did not reveal differences in the learning outcomes of the three groups. However, neither of the two studies involved repetition of the same material, but rather repetition of a questioning scheme applied to different learning material (the same questions appeared in various cases). Therefore, this is not a typical spacing effect. Our explanation is that the repetition of the same questions invoked, in students' minds, information studied previously (one of the questions indeed asked students to recall similar cases), and this repetition resulted in better learning outcomes for the treatment group when implemented in a spaced manner (the previous study [3]) but not in a massed manner (the current study).

Apart from any possible spacing effect, an alternative explanation of the observed outcomes should seriously consider the limited volume of the learning material used in the current time-restricted study (compared to the material in the previous self-paced study). The two-hour restriction of the study phase forced us to limit the volume of the learning material so that students would be able to study all the advice-cases and still have enough time to answer the scenario questions. Hence, the limited material was possibly another important reason why the effect of prompting went unnoticed. Due to the limited material, the complexity and variability of the cases were restricted, and students in all three groups were able to achieve the same level of performance.

Concluding from the results of the two studies, we argue that embedding context-oriented question prompts in technology-enhanced learning environments for learning in ill-structured domains can help students learn more effectively when processing complex material individually. The impact of prompting becomes significant as the complexity of the material increases and the mode of study becomes spaced (as opposed to massed).
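For readers who wish to check the pre-test group comparison, the one-way ANOVA reported in Section 5 can be approximately reconstructed from the published summary statistics alone (per-group mean, standard deviation, and size). The sketch below applies the standard between/within sums-of-squares formulas; it is not the analysis script used in the study, and it yields F(2,37) ≈ 0.12, matching the reported F(2,37) = 0.11 up to rounding of the summary values.

```python
# One-way ANOVA F-statistic reconstructed from per-group summary statistics.
# Standard textbook formulas, not the authors' analysis script.

def anova_from_summary(groups):
    """groups: list of (mean, sd, n) tuples. Returns (F, df_between, df_within)."""
    k = len(groups)
    N = sum(n for _, _, n in groups)
    grand_mean = sum(m * n for m, _, n in groups) / N
    # Between-group sum of squares from the group means.
    ss_between = sum(n * (m - grand_mean) ** 2 for m, _, n in groups)
    # Within-group sum of squares from the group standard deviations.
    ss_within = sum((n - 1) * sd ** 2 for _, sd, n in groups)
    df_b, df_w = k - 1, N - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Pre-test scores (Section 5): NP, WM, TM groups.
F, df_b, df_w = anova_from_summary([(1.73, 0.75, 12),
                                    (1.92, 1.01, 15),
                                    (1.89, 1.31, 13)])
# F(2, 37) comes out near 0.12, consistent with the reported 0.11
# given that the published means and SDs are rounded.
```

The same summary-statistics approach can be used to check the t-tests quoted from [3] in the Discussion.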

7. Acknowledgements

This research has been funded partly by the European Network of Excellence "Kaleidoscope" (contract number 507838) and partly by the Greek-German interstate funding program IKYDA-06.

8. References

[1] Ackerman, J.M. (1993). The promise of writing to learn. Written Communication, 10, 334–370.
[2] Davis, E.A., & Linn, M. (2000). Scaffolding students' knowledge integration: Prompts for reflection in KIE. International Journal of Science Education, 22(8), 819–837.
[3] Demetriadis, S.N., Papadopoulos, P.M., Stamelos, I.G., & Fischer, F. (2007). The effect of scaffolding students' context-generating cognitive activity in technology-enhanced case-based learning. Computers & Education, (in press).
[4] Ewusi-Mensah, K. (2003). Software Development Failures. MIT Press, Cambridge, MA.
[5] Ge, X. (2001). Scaffolding students' problem-solving processes on an ill-structured task using question prompts and peer interactions. Unpublished doctoral thesis. Available from http://etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-75/index.html
[6] Greene, B.A., & Land, S.M. (2000). A qualitative analysis of scaffolding use in a resource-based learning environment involving the World Wide Web. Journal of Educational Computing Research, 23(2), 151–179.
[7] Hmelo, C., & Day, R. (1999). Contextualized questioning to scaffold learning from simulations. Computers & Education, 32, 151–164.
[8] Kitchenham, B., Budgen, D., Brereton, P., & Woodall, P. (2005). An investigation of software engineering curricula. Journal of Systems and Software, 74(3), 325–335.
[9] Kokinov, B. (1999). Dynamics and automaticity of context: A cognitive modeling approach. In: Bouquet, P., Serafini, L., Brezillon, P., Benerecetti, M., & Castellani, F. (eds.), Modeling and Using Context. Lecture Notes in Artificial Intelligence, 1688, Springer, Berlin.
[10] Lin, X., & Lehman, J.D. (1999). Supporting learning of variable control in a computer-based biology environment: Effects of prompting college students to reflect on their own thinking. Journal of Research in Science Teaching, 3(7), 837–858.
[11] Sommerville, I. (2004). Software Engineering, 7th ed. Addison-Wesley.
[12] Toppino, T.C., Hara, Y., & Hackman, J. (2002). The spacing effect in the free recall of homogeneous lists: Present and accounted for. Memory & Cognition, 30(4), 601–606.
[13] Tynjälä, P. (1998). Writing as a tool for constructive learning: Students' learning experiences during an experiment. Higher Education, 36, 209–230.
[14] Yelland, N., & Masters, J. (2007). Rethinking scaffolding in the information age. Computers & Education, 48, 362–382.

