Instructional Science 30: 31–45, 2002. © 2002 Kluwer Academic Publishers. Printed in the Netherlands.
Beyond intelligent tutoring systems: Using computers as METAcognitive tools to enhance learning?

ROGER AZEVEDO
University of Maryland, College of Education, Department of Human Development, College Park, MD 20742-1131, USA (E-mail: [email protected])

Received: 24 April 2001; accepted in final form: 7 June 2001

Abstract. Framed by the existing theoretical and empirical research on cognitive and intelligent tutoring systems (ITSs), this commentary explores two areas not directly or extensively addressed by Akhras and Self (this issue). The first area focuses on the lack of conceptual clarity of the proposed constructivist stance and its related constructs (e.g., affordances, situations). Specifically, it is argued that a clear conceptualization of the novel constructivist stance needs to be delineated by the authors before an evaluation of their ambitious proposal to model situations computationally in intelligent learning environments (ILEs) can be achieved. The second area of exploration deals with the similarities between the proposed stance and existing approaches documented in the cognitive, educational computing, and AI in education literature. I believe that the authors are at a crossroads, and that their article presents an initial conceptualization of an important issue related to a constructivist-based approach to the computational modeling of situations in ILEs. However, conceptual clarity is definitively required in order for their approach to be adequately evaluated and used to inform the design of ILEs. As such, I invite the authors to re-conceptualize their framework by addressing how their constructivist stance can be used to address a particular research agenda on the use of computers as metacognitive tools to enhance learning.

Keywords: intelligent learning environments, intelligent tutoring systems, metacognition, self-regulated learning
Learning involves more than just shifts in cognitive states or objects embedded in a computer-based learning environment. Learning is a complex phenomenon that involves an intricate interaction between neural, cognitive, motivational, affective, and social processes. Most educational researchers have traditionally adhered to a specific theoretical framework (e.g., information processing theory) or a philosophical stance (e.g., constructivism). This theoretical attachment has led to several debates among researchers (e.g., Greeno, Anderson, Simon, Reder) regarding the operational definitions of constructs (e.g., symbols, affordances), theoretical perspectives and philosophical stances (e.g., IPT vs. social constructivism), units of analysis (e.g., individual knowledge states vs. socio-historical accounts of learning), methodological approaches (e.g., verbal protocols and cognitive modeling vs. discourse analysis), and uses of technology for learning (e.g., modelers, non-modelers, and middle-campers; learning from versus learning with technology). We must acknowledge that these debates are a critical part of the evolution of learning theories and reflect the developmental and cyclical nature of theory building, testing, and refinement.
My commentary on Akhras and Self's (this issue) article is framed by the existing theoretical and empirical literature on cognitive and intelligent tutoring systems (ITSs), and explores two areas not directly or extensively addressed by the authors. The first issue focuses on the lack of conceptual clarity of the proposed constructivist stance and its related constructs (e.g., affordances, situations). I will argue that a clear conceptualization of their novel constructivist stance needs to be delineated by the authors before an evaluation of their ambitious proposal to model situations computationally in intelligent learning environments (ILEs) can be achieved. The second area of exploration deals with the similarities between the proposed stance and existing approaches documented in various cognitive, educational, and computing literatures. Their article (this issue) presents an initial conceptualization of an important issue related to the computational modeling of situations in ILEs. However, conceptual clarity is definitively required in order for their approach to be adequately evaluated and used to inform the design of ILEs. As such, I invite the authors to re-conceptualize their framework by addressing how their constructivist stance can be used to address a particular research agenda on the use of computers as metacognitive tools to enhance learning.
Cognitive research and Intelligent Tutoring Systems (ITSs)

One of the few groups to successfully adopt the traditional ITS approach and empirically demonstrate the effectiveness of their theory and tutors is the ACT-R group (e.g., Koedinger & Anderson, 1997). Anderson and colleagues' ACT-R theory of cognitive skill acquisition (Anderson, 1983, 1993) is based on extensive psychological experimentation and has been used to inform the design of the ACT-R computer tutors (Anderson et al., 1995). Their cognitive research and tutor development are well documented in the literature and reflect the evolution of theory building, testing, and refinement based on empirical evidence from laboratory studies and studies of the effectiveness of the ACT-R tutors. They have been extremely successful in translating their theory into instructional principles for their computer tutors (e.g., Koedinger & Anderson, 1998). Part of the cyclical nature of theory building is exemplified by the group's recent extension of the ACT-R theory to include both cognitive and "atomic" aspects of learning. The ACT-R/PM (Anderson & Lebiere, 1998) theory of cognition now includes learning mechanisms that can account for attentional as well as motor aspects of learning. Their new theory is being tested with both real-world tasks (e.g., air traffic control) and computer tutors (e.g., using eye-tracking data to examine how students examine the interface of their computerized tutors). The cycle of theory building, refinement, evaluation, and revision will continue. Their experimental and tutor data will drive them to refine their theory, which will subsequently be used to inform the design of their tutors, and so on. Perhaps some day the ACT-R/PM theory may include Newell's (1989) neural and social bands of human learning.
It is critical to highlight the ACT-R group as an "ITS success story" in light of the criticisms raised by Akhras and Self (this issue). First, it is important to recognize the role of the ACT-R group in the development of early cognitive research that was the impetus for ITS development, with its focus on student modeling, knowledge tracing, knowledge representation, tutoring interventions (e.g., flagging), and the role of immediate feedback. As such, I assert that ITSs have been an important part of early cognitive theory and have contributed to our understanding of cognitive processes and learning mechanisms. They have also contributed to the computational modeling of situations (e.g., algebra, computer programming), and have had a tremendous impact on early and contemporary ITS research. It is therefore unfair of Akhras and Self (this issue) to fail to acknowledge the contributions of cognitive theory and ITS research within the educational computing and AI in education community. Second, the ACT-R group's approach represents a theoretically-driven and empirically-based approach to informing the design of ITSs. In contrast, the authors (this issue) focus heavily on the computational aspects of modeling situations in ILEs without identifying a firm theoretical or philosophical grounding. This is a critical issue that stands in stark contrast to the theory building of the ACT-R group. In fact, their approach is akin to other atheoretical, technology-driven, and intuition-based approaches to the design of CBLEs.
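To make the knowledge-tracing mechanism mentioned above concrete for readers outside the ITS community, the following minimal sketch illustrates the Bayesian updating scheme associated with knowledge tracing in the cognitive tutors literature (cf. Anderson et al., 1995). The parameter values and identifiers are illustrative placeholders of my own, not values from any published tutor.

```python
# A minimal sketch of Bayesian knowledge tracing, the mechanism behind
# "knowledge tracing" in the cognitive tutors literature.
# All parameter values below are illustrative placeholders.

def update_mastery(p_known, correct,
                   p_slip=0.10, p_guess=0.20, p_learn=0.15):
    """Update P(skill mastered) after observing one problem-solving step."""
    if correct:
        num = p_known * (1 - p_slip)
        posterior = num / (num + (1 - p_known) * p_guess)
    else:
        num = p_known * p_slip
        posterior = num / (num + (1 - p_known) * (1 - p_guess))
    # The student may also acquire the skill at this practice opportunity.
    return posterior + (1 - posterior) * p_learn

p = 0.30  # prior probability that the skill is already mastered
for outcome in (True, True, False, True):
    p = update_mastery(p, outcome)
print(f"estimated mastery after four steps: {p:.2f}")
```

In this literature, practice on a skill typically continues until the mastery estimate crosses a threshold (often around 0.95), at which point the tutor moves the student on.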
Theoretical and empirical basis for the design of CBLEs

Akhras and Self (this issue) distance themselves from the traditional ITS approach by proposing a constructivist philosophy of learning, and they raise certain issues which, according to them, should determine the foundation for intelligent learning environments (ILEs). It is difficult to evaluate their "novel" philosophical stance because, as I will argue below, it lacks conceptual clarity, lacks operational definitions of critical constructs, and is presented as
an unfair and incomplete comparison between what has been accomplished (e.g., ACT-R's cognitive research and tutor effectiveness) and their proposed position (Akhras & Self, this issue). Their attempt to distance themselves from traditional ITSs (by modeling the domain in terms of situations instead of knowledge structures, by evaluating the learning process rather than the product, and by proposing that opportunities for learning arise from affordances of situations rather than being provided on the basis of teaching strategies) lacks conceptual clarity. First, it is difficult to evaluate the adequacy of Akhras and Self's (this issue) constructivist philosophy of learning because it is incomplete. Although not acknowledged by the authors, their approach is in many ways very similar to the "modelers" approach (see Derry & Lajoie, 1993; Lajoie, 2000) to computer-based learning environments. The "modelers" approach (e.g., the ACT-R group) is based on the premise that a student's learning can be detected, traced, monitored, and evaluated by following his/her problem-solving steps. In the case of ACT-R, this approach is not based on a philosophy of learning, but rather on several mechanisms underlying the ACT-R theory of skill acquisition which make explicit predictions about learning, knowledge representation and use, performance, and transfer. In both the theory and the tutors, these mechanisms work in coordination with an underlying cognitive model of a well-structured domain (e.g., algebra) to ensure that the student learns all the prerequisite skills and ultimately masters the domain. Due to the limitations found with the traditional ITS approach, Akhras and Self (this issue) propose a new constructivist philosophical stance. Their stance emphasizes "different values and may require an entirely different architecture of intelligent system to support its philosophy of learning." It is my position that their stance needs to be re-conceptualized and tested before it is presented as a novel constructivist approach for ILEs. First, it is unclear why the authors (this issue) fluctuate between the different "flavors" of constructivism, including von Glasersfeld, Piaget, Greeno, Brown, Collins, Duguid, and Jonassen, to name just a few. It is also unclear why we need a "new" constructivist approach, and even if we did, why one would try to amalgamate certain components of existing constructivist frameworks. The existing constructivist frameworks are quite diverse and do not always share the same view of learning, including the uses of technology for learning. For example, Jonassen's (2000) constructivist view of computers as mindtools is different from Lajoie's (1993, 2000; Lajoie & Azevedo, 2000) view of using computers as cognitive tools for enhancing learning. These differences between frameworks and uses of technology for learning must
be acknowledged and not overlooked. Similarly, new theoretical approaches and philosophical stances (e.g., Akhras & Self, this issue) need to build on existing research and ILEs.
Akhras and Self's (this issue) presentation of their stance reflects several of the problems mentioned above. First, they borrow terms from several existing frameworks and do not operationally define them. For example, they incorporate several constructs/assertions/statements including, "knowledge is individually constructed from what learners do . . . and cannot be objectively defined, . . . autonomous role of the learner, focus on process." They need to acknowledge that most of the terms they borrow are derived not only from other constructivist frameworks but also from cognitive theories and models of learning and instruction. For example, many cognitive researchers as well as constructivists view the learner as a constructor of his/her own knowledge (e.g., Mayer & Wittrock, 1996; Chi, 2000); cognitive theory too focuses on the process of learning (e.g., Anderson & Lebiere, 1998; Newell & Simon, 1972); and many ITSs model students based on a detailed analysis of the learning process (e.g., Anderson et al., 1995). Second, the authors need to explicitly define terms and clarify several statements, such as "an ILE should be attuned to features of the learner, the environment, and the interaction between learner and environment that differ in fundamental ways from the features that are relevant to ITSs". How is "attuned" defined within their framework? What does it mean for an ILE to be attuned to the learner? How is their proposed ILE attunement to the learner different from the type provided by traditional ITSs? Third, it is unclear why the authors (this issue) assert the need for the system to model situations, evaluate learning, and promote learning. Why should all this be accomplished if it is not theoretically driven, and even if it were, how could the present technological limitations (e.g., natural language processing, interpretation of intonation and gestures) be overcome in order to model situations to their fullest extent? Overall, I disagree with the authors' opinion that their constructivist stance (this issue) presents an alternative to the traditional ITS architecture. On the contrary, their stance overlaps with much of what has been called the "modelers" and "middle campers" approaches to using computers as cognitive tools for enhancing learning (see Lajoie & Derry, 1993; Lajoie, 2000, for an extensive review of these two perspectives).
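Because the "modelers" approach rests on following a student's problem-solving steps, a brief sketch of model tracing may also be useful here. The two production rules and the algebra example below are invented stand-ins of my own; actual cognitive tutors encode a full curriculum of such rules, including known "buggy" rules that capture common misconceptions.

```python
# A minimal sketch of model tracing: each student step is compared
# against steps generated by correct and buggy rules for equations of
# the form a*x = b. Both rules are invented stand-ins.

def correct_rule(state):
    """Correct production: solve a*x = b by dividing both sides by a."""
    a, b = state
    return ("divide-both-sides", b / a)

def buggy_rule(state):
    """Buggy production encoding a misconception: subtract a instead."""
    a, b = state
    return ("subtract-coefficient", b - a)

def trace_step(state, student_answer):
    """Classify a student step as correct, a known bug, or unrecognized."""
    name, value = correct_rule(state)
    if student_answer == value:
        return f"correct ({name}); credit the skill"
    name, value = buggy_rule(state)
    if student_answer == value:
        return f"flag: known misconception ({name}); give targeted feedback"
    return "unrecognized step: offer a hint"

print(trace_step((4, 12), 3))  # matches the correct rule
print(trace_step((4, 12), 8))  # matches the buggy rule, so it is flagged
```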
From modeling knowledge structures to modeling situations

As with other aspects of their argument, Akhras and Self (this issue) find it necessary to go from modeling knowledge structures to modeling situations.
What is a situation? What constitutes a situation? Which components and/or aspects of a situation should be modeled, and why? What are the different levels of a situation? How do the different levels of a situation reflect their constructivist stance? Are neural, perceptual, physical, cognitive, socio-cultural, socio-historical, affective, metacognitive, motivational, linguistic, gestural, and self-regulatory factors part of a situation? Which of these factors should be modeled, and why? How does a constructivist theory account for the different levels of human learning? How do we detect, trace, monitor, and model the complex interactions between factors in a situation? How do these factors interact over time (i.e., during learning "outside" the ILE, in other situations and with other agents) and with repeated use of the ILE? These are extremely complex issues which Akhras and Self (this issue) need to begin to address, clarify, and explicitly discuss so that we can truly appreciate the contribution of their framework and research on ILEs. Otherwise, it is extremely difficult to determine the value of their contribution and properly evaluate it vis-à-vis existing constructivist frameworks and research on ILEs (for an extensive review see Jacobson & Kozma, 2000; Jonassen, 2000; Lajoie & Derry, 1993; Lajoie, 2000; Shute & Psotka, 1996).
Is a sentence, a math problem, a clinical case, or a modeling and simulation tool (e.g., Stella, Model-It) a situation, or is it part of a situation? Does the learner have to be working alone or collaboratively to be part of a situation? What other parts of the "situation" make it a situation? How does the computational modeling of the (part of the) situation correspond to the authors' (this issue) philosophical stance? Which parts of a situation can be computationally modeled, and why? Does a situation include an individual learner diagnosing a medical case using a computer tutor which is housed in his/her office at a hospital (e.g., Azevedo & Lajoie, 1998)? Does a situation include an experienced nurse solving a complex trauma case using a simulation-based computer tutor housed in the chaotic confines of a surgical intensive care unit (Lajoie, Azevedo & Fleiszer, 1998; Lajoie & Azevedo, 2000)? Does it include a dynamic and evolving external representation of a student's (internal) mental model of the cardiovascular system? If so, should the ILE also model the variables associated with the student's ability to regulate his/her own learning of the cardiovascular system (Azevedo et al., 2001; Azevedo, Guthrie, Seibert & Wang, in prep.)? Does a situation include a dozen dyads in a high school science class using a Web-based simulation environment to solve ecology problems, where they have access to several teachers and are surrounded by educational resources (Azevedo, Verona & Cromley, 2001)? What role should the teachers play according to Akhras and Self's stance? Which aspects of the "situation" should the teachers model, and which aspects of the "situation" should the ILE model? How can the
ILE model such a complex situation with various levels of complexity? But wait: we still do not have a theoretically-motivated operational definition of a situation that could begin to address these questions.
In addition, I find it peculiar that the authors (this issue) chose a non-academic, impractical task such as salad-making to illustrate their constructivist stance. Are we reverting back to an earlier generation of cognitive and ITS research where we computationally modeled toy tasks – meaningless tasks that are easy for a computer scientist to model, but irrelevant to the real world? I would rather have preferred to see the authors use their constructivist stance to design an ILE that would be relevant to people who live outside the lab – teachers, students, tutors, trainers. For example, how about an ILE that can tutor struggling adolescent readers, and therefore model the tutoring situation, which includes the cognitive, motivational, and affective states of the tutee, the instructional scaffolding techniques used by the tutor, and the co-joint knowledge construction activities based on tutor-tutee interactions (Cromley, 2001)? This would pose a real challenge to the authors' stance and "bring them out of the lab" to collaborate synergistically with educators and psychologists to tackle the complex issue of modeling situations (however defined) in addressing educational and professional concerns.
Akhras and Self (this issue) revert back to the traditional ITS approach by highlighting the need to incorporate cognitive structures after earlier stating that knowledge cannot be objectively defined. In addition, they also state that an ILE should include a domain model in the form of situations. How is the Akhras and Self (this issue) approach to modeling situations different from existing ILEs which do not model cognitive structures or learning processes (e.g., Erickson & Lehrer, 2000; Kozma et al., 2000)? Furthermore, design approaches to ILEs tend not to include a "modeling" component in their environments (Jonassen & Land, 2000), mainly because "modeling" is seen as antithetical to the constructivist framework. These issues need to be clarified by Akhras and Self (this issue) in order for one to properly evaluate their constructivist philosophy of ILEs. It also seems that the authors have reverted to a traditional ITS approach by stating that domain knowledge structures will be part of situation models and that they will be modeled in terms of objects, relations, and other kinds of structures. It should be noted that this approach has been used extensively by other ITS researchers (including "modelers" and "middle-campers"), who have modeled domain knowledge as part of a situation model (e.g., algebra problems in ACT-R tutors, medical cases in the SICUN tutor) embedded in computer-based learning environments.
In sum, Akhras and Self (this issue) present an initial conceptualization of a constructivist-based approach to modeling situations in ILEs.
However, their ambitious proposal is typical of novel philosophical frameworks: conceptual clarity and construct validation remain elusive. This stands in contrast to the theory-driven and empirically-based approach taken by others such as the ACT-R group. Nevertheless, I do agree with Akhras and Self (this issue) that perhaps ILEs should reason about interactions between the content and the dynamics of learning situations. However, like the ACT-R group, I use theory and empirical data to inform the design of my CBLEs. What is not clear from their article (this issue) is the "what, when, how, and why" related to this issue. My first suggestion is for the authors to clearly and explicitly define their constructivist position and not to try to construct one that is based on an amalgamation of various "flavors" of constructivism (e.g., von Glasersfeld, Piaget, etc.). Conceptual clarity is required in order for their approach to be adequately evaluated and used to inform the design of ILEs. As such, I invite the authors to re-conceptualize their framework by addressing how their constructivist stance can be used to address a particular research agenda that focuses on the use of computers as metacognitive tools for enhancing learning. I briefly sketch the issues in the next section.
The use of computers as metacognitive tools to enhance learning: A theoretically-based and empirically-derived approach

As part of my commentary I would like to invite Akhras and Self (this issue) to re-conceptualize their framework by challenging them to think about how ILEs can be used as metacognitive tools to enhance learning. My colleagues and I are currently grappling with issues similar to those raised by Akhras and Self (this issue). More specifically, we are interested in the "what, when, how, and why" questions related to modeling a situation – i.e., the phases and processes used by students to regulate their learning with hypermedia and web-based environments. We are investigating the role of self-regulation during learning of complex systems (e.g., circulatory and ecological systems) with hypermedia and web-based environments (Azevedo et al., 2001; Azevedo et al., in prep.; Azevedo, Verona & Cromley, 2001). Self-regulated learning has recently emerged as a central issue in educational and psychological research, and there are several outstanding theoretical and empirical issues related to learning with adaptive hypermedia systems designed to foster self-regulated learning (SRL). The purpose of this section is to briefly outline a theoretically-based and empirically-driven research agenda which examines the role of self-regulation in students' learning with hypermedia and web-based environments. These environments are designed to foster mental model progression of complex systems
(e.g., the circulatory system) by detecting, tracing, modeling, and fostering self-regulatory skills.
Self-regulated learners are generally characterized as active learners who efficiently manage their own learning in many different ways (Winne, 1998; Winne & Perry, 2000; Schunk & Zimmerman, 1994). Self-regulated learning is an active, constructive process whereby learners set goals for their learning and then attempt to monitor, regulate, and control their cognition, motivation, and behavior (Pintrich, 2000). Models of self-regulation (e.g., Winne & Perry, 2000; Pintrich, 2000; Zimmerman, 2000) describe a recursive cycle of cognitive activities central to learning and knowledge construction activities (e.g., using a hypermedia environment to learn about the circulatory system). Most of these models propose four phases of self-regulated learning (Pintrich, 2000). The first phase includes planning and goal setting, activation of perceptions and knowledge of the task and context, and of the self in relationship to the task. The second phase includes various monitoring processes that represent metacognitive awareness of different aspects of the self, task, and context. Phase three involves efforts to control and regulate different aspects of the self, task, and context. Lastly, phase four represents various kinds of reactions and reflections on the self and the task and/or context. Our research on learners' SRL addresses a critical but as yet unexplored issue related to learning with adaptive computer-based learning systems.

Foundations for research on self-regulation and hypermedia

My colleagues and I have recently begun to investigate the effects of goal-setting conditions (e.g., learner-generated versus experimenter-set goals) on learners' ability to self-regulate their learning with hypermedia (Azevedo et al., 2001; in prep.). So far, our research addresses three specific research questions: (1) Do different goal-setting conditions influence students' ability to shift to a more sophisticated mental model of the circulatory system? (2) How do goal-setting conditions influence students' self-regulation in a hypermedia environment? (3) What are the qualitative differences in students' self-regulated learning across the three goal-setting conditions?
Methods. Our studies combine true experimental designs (where students are randomly assigned to several instructional conditions) with a think-aloud protocol methodology, in which participants are asked to verbalize their thinking processes as they learn about the circulatory system using a hypermedia environment. This mixed methodological strategy allows us to determine the effects of various instructional interventions on SRL and to examine the dynamic nature of SRL variables during learning with hypermedia.
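Before turning to the results, the four-phase model described above can be made concrete. The following is a hedged sketch of one way an ILE might represent the cycle internally; the state-machine simplification and all identifiers are my own illustrative choices, since the phases in these models are recursive and overlapping rather than strictly sequential.

```python
# A minimal sketch of the four-phase SRL cycle (after Pintrich, 2000)
# as an ILE might represent it. Treating the cycle as a simple state
# machine is an illustrative simplification.

from enum import Enum

class SRLPhase(Enum):
    PLANNING = 1    # goal setting; activating knowledge of task, context, self
    MONITORING = 2  # metacognitive awareness of self, task, and context
    CONTROL = 3     # regulating aspects of the self, task, and context
    REFLECTION = 4  # reactions and reflections on the self and the task

def next_phase(phase: SRLPhase) -> SRLPhase:
    """Advance the cycle; reflection feeds back into planning."""
    return SRLPhase(phase.value % 4 + 1)

phase = SRLPhase.PLANNING
for _ in range(4):
    print(phase.name)
    phase = next_phase(phase)  # PLANNING -> MONITORING -> CONTROL -> REFLECTION
```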
Results. The results from our initial study investigating the nature of self-regulated learning (SRL) with hypermedia focus on: (1) shifts in mental models (of the circulatory system) from pretest to posttest, (2) the role of multiple representations during learning with hypermedia, (3) a coding scheme developed to analyze learners' self-regulatory behavior, (4) establishing a model of SRL with hypermedia, and (5) understanding the dynamics of SRL variables during learning. We have found five clusters of SRL variables used by learners while using a hypermedia environment to learn about the circulatory system: (1) Planning (planning, sub-goaling, prior knowledge activation, and recycling a goal in working memory); (2) Monitoring (judgment of learning, feeling of knowing, self-questioning, content evaluation, and identifying the adequacy of information available in the hypermedia environment); (3) Strategy use (selecting a new informational source, searching, summarization, copying information, re-reading, making inferences, hypothesizing, knowledge elaboration, and evaluating the content as the answer to a question); (4) Task difficulty and demands (time and effort planning, help-seeking behavior, task difficulty, control of context, and expectation of adequacy of information); and (5) Interest statements (the learner has a certain level of interest in the task or in the content domain of the task). This briefly demonstrates how we, as psychologists working with domain experts, take a theoretically-driven approach to extending existing theories and methods of SRL to the study of how students learn from hypermedia and web-based environments. Our results will subsequently be used to inform the design of adaptive hypermedia and web-based systems. This leads to the role of computer scientists and AI researchers in the next phase of our research program: how can we address the issues presented below, which have been derived from our empirical data?
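Since the questions raised in the next section concern how a system might detect and model such variables, it is worth noting that the coding scheme itself is readily expressed as a simple data structure. The sketch below encodes the five clusters listed above and tallies cluster frequencies from a coded think-aloud protocol; the dictionary representation is an illustrative choice of mine, not part of our published methodology.

```python
# A sketch of the five-cluster SRL coding scheme described above as a
# taxonomy, with a helper that tallies cluster frequencies from a coded
# think-aloud protocol. Cluster and variable names follow the text.

SRL_CODING_SCHEME = {
    "planning": ["planning", "sub-goaling", "prior knowledge activation",
                 "recycling a goal in working memory"],
    "monitoring": ["judgment of learning", "feeling of knowing",
                   "self-questioning", "content evaluation",
                   "identifying adequacy of information"],
    "strategy_use": ["selecting a new informational source", "searching",
                     "summarization", "copying information", "re-reading",
                     "making inferences", "hypothesizing",
                     "knowledge elaboration",
                     "evaluating content as the answer to a question"],
    "task_difficulty_and_demands": ["time and effort planning",
                                    "help-seeking behavior", "task difficulty",
                                    "control of context",
                                    "expectation of adequacy of information"],
    "interest": ["interest statement"],
}

# Invert the taxonomy so each coded variable maps back to its cluster.
VARIABLE_TO_CLUSTER = {v: c for c, vs in SRL_CODING_SCHEME.items() for v in vs}

def tally_clusters(coded_protocol):
    """Count how often each SRL cluster occurs in a coded protocol."""
    counts = {cluster: 0 for cluster in SRL_CODING_SCHEME}
    for variable in coded_protocol:
        counts[VARIABLE_TO_CLUSTER[variable]] += 1
    return counts

# e.g., three coded segments from one learner's think-aloud protocol
print(tally_clusters(["sub-goaling", "re-reading", "judgment of learning"]))
```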
Implications of SRL research for the design of adaptive hypermedia environments: Issues and challenges

To re-conceptualize their constructivist framework, I invite Akhras and Self to explain how their evolving constructivist stance would apply to our research on self-regulation and learning, and to the design of hypermedia and web-based environments aimed at detecting, modeling, monitoring, and fostering learners' self-regulated learning. How would they model a situation involving a student using a hypermedia environment to learn about complex systems? Existing SRL models (for an extensive review refer to Boekaerts, Pintrich & Zeidner, 2000) and our research indicate that there are several phases and processes that a student uses when regulating his/her learning
about complex systems. How can we model a situation which includes several phases (e.g., planning, monitoring, controlling, and reflecting) and processes (e.g., cognitive, affective, motivational) related to self-regulation? What about changing the situation to a dyad learning collaboratively, or to a tutoring situation? How does this change the nature of the situation? How can the emerging knowledge structures that are being co-jointly constructed be modeled as part of the situation? How would the ILE model nonverbal behavior (gesture and tone used by a tutor and student), internal mental representations of the individual learner (e.g., a mental model of the circulatory system), and shared emerging representations between student and student or teacher and student? How would the ILE detect, trace, model, and monitor these components of the situation in order to reason about the situation? What other components would be necessary for an ILE to adequately model the different levels of a situation (e.g., what are the implications for the instructional and motivational planners)? How would a constructivist-based ILE detect, trace, and monitor the critical SRL variables used by highly self-regulated learners, e.g., planning, sub-goaling, prior knowledge activation, self-questioning, coordination of multiple representations, re-reading, knowledge elaboration, intentional control of time on task, taking advantage of the tools embedded in the hypermedia environment to enhance learning of the instructional material, and motivational aspects related to the learner's interest in the topic? Which AI techniques could be used to detect, monitor, and model these variables? Akhras and Self's framework would need to account for how the ILE could handle the complexity involved in detecting, tracing, and monitoring these variables during learning.
What types and levels of scaffolding methods should be designed for low self-regulating learners? According to our research results, these students typically do not plan their learning activities, fail to set instructional goals, fail to monitor their learning, use ineffective learning strategies, and manage their learning by engaging in a great deal of help-seeking behavior, since they have difficulty judging task difficulty and fail to integrate new information with existing prior knowledge. So, how do we "expand" the ILE's components (e.g., student model, instructional model, interface, etc.) to determine whether a learner is a low or high self-regulator, and what effects will this determination have on the detection, monitoring, and fostering of learners' overall self-regulation? How do we make our SRL model "visible" to learners and flexible enough to allow them to explore advanced topics related to the circulatory system, including its content and structure? How does the environment adapt and exhibit flexibility during learning? What are the implications of our SRL model for designing the student model, instructional planner, motivational planner, and other system components which may be needed for the system to detect, trace, monitor, model, and foster self-regulated learning? For example, do we need to build an SRL palette, similar to a help system, which allows learners to indicate that they do not know how to plan their learning of the cardiovascular system? In this case, should the ILE present the student with a planning net which displays a sequence of possible sub-goals that he/she should attempt? What if the student indicates low motivation (e.g., low interest in the task)? How can the ILE detect low motivation? Should it ask the student explicitly about his/her motivational state (Lepper et al., 1993; du Boulay et al., 1999) on a regular basis, or should the student be made aware that there is an on-line motivational palette (part of the SRL palette) which he/she can access and use to modify his/her current motivational level during learning? And even if the ILE is successful in detecting the learner's motivational level, how should the instructional planner and student model react? Should the student be challenged? How do these decisions affect subsequent learning (including learning "outside" the ILE)? None of these questions can be addressed without a theory and evidence.
How can we design ways of detecting, monitoring, and fostering shifts in learners' mental models of the circulatory system? Can we have students create concept maps and/or drawings which can be used to dynamically assess their existing mental models and which will interact with the other system components? For example, what kind of instructional decisions should be made in the case where the ILE has determined that a student has a sophisticated mental model of the circulatory system but has expressed low interest in the task, versus a learner who has a less sophisticated mental model but has indicated high interest in the topic, yet lacks the ability to plan his/her learning and is using ineffective strategies (e.g., "blindly" searching the hypermedia environment without any goals)? Would making the learner construct an "external" visual representation of his/her emerging mental model of the domain allow him/her to self-regulate? Can this information provide the system with another "variable" with which to make informed instructional decisions? Would this external representation, which is visible and accessible, allow the user and others (e.g., peers, teachers) to share, inspect, critique, modify, and assess the learner's understanding of a complex system? Again, Akhras and Self's framework needs to include a model of the evolving internal knowledge structures and make them accessible to both the individual learner and the other agents participating in the situation.
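To make these design questions concrete, the following hedged sketch shows one way traced SRL events might feed an instructional decision of the kind discussed above. The event names, thresholds, and scaffolding rules are invented for illustration; reliably detecting such events from real interaction data is precisely the open problem raised in this section.

```python
# A hedged sketch of how traced SRL events might feed an instructional
# decision. Event names, thresholds, and scaffolding rules are invented
# placeholders, not results from our studies.

LOW_SRL_SIGNS = {"no goals set", "blind searching", "no monitoring"}

def classify_self_regulation(events):
    """Crude heuristic: several low-SRL signs => low self-regulator."""
    hits = sum(1 for e in events if e in LOW_SRL_SIGNS)
    return "low" if hits >= 2 else "high"

def choose_scaffold(srl_level, interest):
    """Pick a scaffold given SRL level and self-reported interest."""
    if srl_level == "low" and interest == "low":
        return "offer a planning net with sub-goals plus a motivational prompt"
    if srl_level == "low":
        return "offer a planning net with a sequence of possible sub-goals"
    if interest == "low":
        return "pose a challenge problem to raise engagement"
    return "stay in the background; the learner is self-regulating"

events = ["no goals set", "blind searching", "re-reading"]
print(choose_scaffold(classify_self_regulation(events), interest="high"))
```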
In sum, Akhras and Self (this issue) need to clearly conceptualize their novel constructivist stance before we can begin to appreciate the contribution of their ambitious proposal to model situations computationally in intelligent learning environments (ILEs). The authors raise several interesting ideas. However, conceptual clarity is definitively required in order for their approach to be adequately evaluated and used to inform the design of ILEs. An invitation has been put forth, based on my research on SRL and learning with hypermedia and web-based environments, to stimulate the authors in re-conceptualizing their framework. How can their constructivist stance be used to address a research agenda that focuses on the use of computers as metacognitive tools to enhance learning? These kinds of academic exchanges are fruitful in stimulating collaboration among educators, learning scientists, computer scientists, and AI researchers to solve shared problems.
Acknowledgments

I would like to thank Patricia Alexander for giving me the opportunity to review the original manuscript and write this commentary. I would also like to thank Fabio Akhras and John Self for the opportunity to comment on the initial conceptualization of their ideas regarding the computational modeling of situations in intelligent learning environments (ILEs). Lastly, I would also like to thank Jennifer Cromley for comments on a previous version of this manuscript.
References

Anderson, J.R. (1983). The Architecture of Cognition. Cambridge, MA: Harvard University Press.
Anderson, J.R. (1993). Rules of the Mind. Hillsdale, NJ: Erlbaum.
Anderson, J.R., Corbett, A.T., Koedinger, K.R. & Pelletier, R. (1995). Cognitive tutors: Lessons learned. The Journal of the Learning Sciences 4(2): 167–207.
Anderson, J.R. & Lebiere, C. (1998). The Atomic Components of Thought. Mahwah, NJ: Erlbaum.
Azevedo, R., Guthrie, J.T., Seibert, D. & Wang, H. (in prep.). The Role of Learner-generated Goals in Regulating Learning from Hypermedia. Manuscript in preparation.
Azevedo, R., Guthrie, J.T., Wang, H. & Mulhern, J. (2001). Do Different Instructional Interventions Facilitate Students' Ability to Shift to More Sophisticated Mental Models of Complex Systems? Paper to be presented at the Annual Conference of the American Educational Research Association, Seattle, WA.
Azevedo, R. & Lajoie, S.P. (1998). The cognitive basis for the design of a mammography interpretation tutor. International Journal of Artificial Intelligence in Education 9(1/2): 32–44.
Azevedo, R., Verona, M.E. & Cromley, J.G. (2001, May). Fostering Students' Collaborative Problem Solving with RiverWeb. Paper to be presented at the 10th International Conference on Artificial Intelligence in Education, San Antonio, TX.
Boekaerts, M., Pintrich, P.R. & Zeidner, M. (2000). Handbook of Self-regulation. San Diego, CA: Academic Press.
Chi, M.T.H. (2000). Self-explaining: The dual processes of generating inference and repairing mental models. In R. Glaser, ed., Advances in Instructional Psychology: Educational Design and Cognitive Science (Vol. 5), pp. 161–238. Mahwah, NJ: Erlbaum.
Cromley, J.G. (2001, May). Effective Human Tutoring in Reading: Precursor to the Design of an ITS. Paper to be presented at the 10th International Conference on Artificial Intelligence in Education, San Antonio, TX.
Derry, S.J. & Lajoie, S.P. (1993). A middle camp for (un)intelligent instructional computing: An introduction. In S.P. Lajoie & S.J. Derry, eds, Computers as Cognitive Tools, pp. 1–11. Hillsdale, NJ: Erlbaum.
Du Boulay, B., Luckin, R. & del Soldato, T. (1999). The plausibility problem: Human teaching tactics in the "hands" of a machine. In S.P. Lajoie & M. Vivet, eds, Frontiers in Artificial Intelligence and Applications. Open Learning Environments: New Computational Technologies to Support Learning, Exploration and Collaboration, pp. 225–232. Amsterdam: IOS Press.
Erickson, J. & Lehrer, R. (2000). What's in a link? Student conceptions of the rhetoric of association in hypermedia composition. In S.P. Lajoie, ed., Computers as Cognitive Tools II: No More Walls: Theory Change, Paradigm Shifts and Their Influence on the Use of Computers for Instructional Purposes, pp. 197–226. Mahwah, NJ: Erlbaum.
Jacobson, M.J. & Kozma, R.B. (2000). Innovations in Science and Mathematics Education: Advanced Designs for Technologies of Learning. Mahwah, NJ: Erlbaum.
Jonassen, D. (2000). Computers as Mindtools for Schools: Engaging Critical Thinking (2nd ed.). Englewood Cliffs, NJ: Merrill.
Jonassen, D. & Land, S.M., eds (2000). Theoretical Foundations of Learning Environments. Mahwah, NJ: Erlbaum.
Koedinger, K.R. & Anderson, J.R. (1997). Intelligent tutoring goes to school. International Journal of Artificial Intelligence in Education 8: 30–43.
Koedinger, K.R. & Anderson, J.R. (1998). Illustrating principled design: The early evolution of a cognitive tutor for algebra symbolization. Interactive Learning Environments 5: 161–179.
Kozma, R., Chin, E., Russell, J. & Marx, N. (2000). The roles of representations and tools in the chemistry laboratory and their implications for chemistry learning. Journal of the Learning Sciences 9(2): 105–144.
Lajoie, S.P. (1993). Computer environments as cognitive tools for enhancing learning. In S.P. Lajoie & S.J. Derry, eds, Computers as Cognitive Tools, pp. 261–288. Hillsdale, NJ: Erlbaum.
Lajoie, S.P. (2000). Computers as Cognitive Tools II: No More Walls: Theory Change, Paradigm Shifts and Their Influence on the Use of Computers for Instructional Purposes. Mahwah, NJ: Erlbaum.
Lajoie, S.P. & Azevedo, R. (2000). Cognitive tools for medical informatics. In S.P. Lajoie, ed., Computers as Cognitive Tools II: No More Walls: Theory Change, Paradigm Shifts and Their Influence on the Use of Computers for Instructional Purposes, pp. 247–271. Mahwah, NJ: Erlbaum.
Lajoie, S.P., Azevedo, R. & Fleiszer, D.M. (1998). Cognitive tools for assessment and learning in a high flow information environment. Journal of Educational Computing Research 18(3): 203–233.
Lajoie, S.P. & Derry, S.J. (1993). Computers as Cognitive Tools. Hillsdale, NJ: Erlbaum.
Lepper, M., Woolverton, M., Mumme, D. & Gurtner, J. (1993). Motivational techniques of expert human tutors: Lessons for the design of computer-based tutors. In S.P. Lajoie & S.J. Derry, eds, Computers as Cognitive Tools, pp. 75–105. Hillsdale, NJ: Erlbaum.
Mayer, R.E. & Wittrock, M.C. (1996). Problem solving transfer. In D. Berliner & R. Calfee, eds, Handbook of Educational Psychology, pp. 45–61. New York: Macmillan.
Newell, A. (1989). Putting it all together. In D. Klahr & K. Kotovsky, eds, Complex Information Processing: The Impact of Herbert A. Simon, pp. 399–440. Hillsdale, NJ: Erlbaum.
Newell, A. & Simon, H.A. (1972). Human Problem Solving. Englewood Cliffs, NJ: Prentice Hall.
Pintrich, P.R. (2000). The role of goal orientation in self-regulated learning. In M. Boekaerts, P. Pintrich & M. Zeidner, eds, Handbook of Self-regulation, pp. 451–502. San Diego, CA: Academic Press.
Schunk, D. & Zimmerman, B. (1994). Self-regulation of Learning and Performance: Issues and Educational Applications. Hillsdale, NJ: Erlbaum.
Shute, V. & Psotka, J. (1996). Intelligent tutoring systems: Past, present, and future. In D. Jonassen, ed., Handbook of Research for Educational Communications and Technology, pp. 570–600. New York: Macmillan.
Winne, P.H. (1998). Experimenting to bootstrap self-regulated learning. Journal of Educational Psychology 89: 397–410.
Winne, P.H. & Perry, N.E. (2000). Measuring self-regulated learning. In M. Boekaerts, P. Pintrich & M. Zeidner, eds, Handbook of Self-regulation, pp. 531–566. San Diego, CA: Academic Press.
Zimmerman, B. (2000). Attaining self-regulation: A social-cognitive perspective. In M. Boekaerts, P. Pintrich & M. Zeidner, eds, Handbook of Self-regulation, pp. 13–35. San Diego, CA: Academic Press.