Nature and Methods of Learning by Doing
THE NATURE AND METHODS OF LEARNING BY DOING
Alan M. Lesgold
University of Pittsburgh
Lesgold, A. (2001). The nature and methods of learning by doing. American Psychologist, 56(11), 964-973.
CONTACT: Alan M. Lesgold
Dean, School of Education
University of Pittsburgh
5T01 WWPH, 230 S. Bouquet St., Pittsburgh, PA 15260
Phone: 412-648-1773  Fax: 412-648-1825
[email protected]
Abstract

This presentation reviews some of the psychological research on learning by doing and discusses the role that learning-by-doing approaches can play in education and training. It includes a discussion of the author's implementations of this approach and the lessons learned from these implementations. Improved forms of learning by doing now can be supported by information technologies, and there are prospects for extensions to group learning by doing and group learning from examples in the near future.

Table of Contents

Abstract
Experiences Building Systems to Promote Learning by Doing
  The Sherlock Years
  Sherlock 2: Transfer
  The Intel Experience
Learning from Conversations
Final Comment
References
Figures
Brief Biography of Alan Lesgold
Selected Bibliography of Alan Lesgold, Chronologically Ordered
In the history of education, the most striking phenomenon is that schools of learning, which at one epoch are alive with a ferment of genius, in a succeeding generation exhibit merely pedantry and routine. The reason is, that they are overladen with inert ideas. Education with inert ideas is not only useless: it is, above all things, harmful…. Let the main ideas which are introduced into a child's education be few and important, and let them be thrown into every combination possible. The child should make them his own, and should understand their application here and now in the circumstances of his actual life. (Whitehead, 1929, p. 14)

Whitehead observed that we learn from experience, abstract or codify that experience, and then deal only in the codifications when we teach the next generation. His views on the problems with such an approach have driven much of my work for the last couple of decades. On the one hand, we see signs in the academy that starting from abstractions won't work. For example, it is not unusual for a doctoral program in mathematics to culminate in a few courses labeled with titles like "Introduction to Algebra." Mathematicians know that the abstractions they use to organize their knowledge cannot be learned except after substantial mathematics experience.

On the other hand, we also see remarkable examples of the problems Whitehead raised in his essay. Schooling in the United States is all too often a "mile wide and an inch deep." Fragmentary topics are taught, quickly memorized, tested, and then forgotten. Also, training courses all too often consist of "theory" first and brief practical experience afterwards. Further, any university course that provides practical experience solving complex problems is pejoratively labeled as "training" and not "education." My own psychology department subscribes to the view that the right preparation of an applied cognitive psychologist is training in basic
research, which provides a "foundation" upon which later practical efforts can be grounded. This is defensible if the foundation is grounded in experience actually doing basic research, though experience doing applied research would broaden the experiential base for conceptual understanding.

While I believe that Whitehead's view is well supported by psychological research findings, the issue is not completely settled. In particular, it is not yet clear whether certain kinds of learning goals are best achieved with an earlier emphasis on abstracted and formally codified knowledge. However, it is quite clear that there is a role for learning by doing and that it is sometimes much more effective than traditional schooling approaches, especially when the goal is for a person to be able to use the acquired knowledge to attack complex, incompletely structured, and novel problems. Such problems are a major component of technical and professional work in our information economy. My work in the past two decades has been driven increasingly by the belief that more learning by doing is needed in most education and training situations.

Psychologists' aversion to learning by doing as the fundamental form of learning is driven in part by an incomplete understanding of some basic psychological principles, especially those relating to transfer. The dominant theory of transfer for the past century has been the "identical elements" view put forward by Thorndike (Thorndike & Woodworth, 1901). Further, extensions to the Thorndike view to account for transfer have tended to see abstraction as the key to transfer (e.g., Laird, Newell, & Rosenbloom, 1987). An alternative view has arisen in the computational study of case-based reasoning (see Carbonell, 1986; Kolodner, 1993; Leake, 1996). While there is a lot of complexity in the research on case-based reasoning, the fundamental method has three basic steps: situation assessment, retrieval of a relevant case deemed most
likely to be useful, and adaptation of the solution in that case to fit the new circumstances. From the point of view of case-based reasoning researchers, then, transfer comes not from identical "elements" but rather from experience with cases that, as a group, cover the range of situations likely to arise in a given domain. While case-based reasoning approaches have proven very useful in the world of machine cognition, there is less explicit testing of such approaches in humans (but see Kolodner, 1994; and Schank, 1995). Nonetheless, the success of case-based reasoning in artificial intelligence suggests that approaches based upon its principles should be effective, something colleagues and I set out to test beginning in about 1984.

The fundamental thought we had in mind was that if tough tasks in the real world are handled best via case-based reasoning, then we ought to consider what knowledge is needed to become a good case-based reasoner. Certainly, one needs experience carrying out the three steps of situation assessment, retrieval of relevant cases, and adaptation of past solutions. In addition, though, one needs a mental library of cases, a collection rich enough to come close to most situations in which we expect the student to be able to perform later. With these two kinds of training, high levels of domain expertise should become possible. Much of my work for about ten years was directed at practical implementations of this idea.

Experiences Building Systems to Promote Learning by Doing

When it really matters, our culture follows the learning by doing scheme pretty well. Consider the coaching of football. People learn to play football by playing football. There are drills on fragments of football play, but they represent focused learning by doing on components that are themselves complex problems. Further, detailed coaching consists in large part of helping players notice the situations they are in and map appropriate cases onto them.
So, for example, modern football coaches videotape all practices and games and index the
individual plays so that they can present a player with a collection of recent game experiences and help the player sort out the features that determine which of several cases – or abstractions of cases – is the best guide to a given situation. Put another way, football is learned by experiencing a large number of cases and receiving detailed advice on how to do the situation assessment and adaptation needed to apply old cases to new situations.

Certainly, there is abstraction. That is what we hear when we listen to a radio broadcast of a football game. The terms used are terms relevant to situation assessment and adaptation. But they get their meaning from our experience either playing football or watching it. Few people learn how to comprehend football games by reading a book on the theory of football, and fewer still actually learn to play the game that way.

We have a view of this kind of learning that becomes less charitable when we step back and abstract it. We then call it "the school of hard knocks" or "sink or swim." In reality, the application of psychology to the design of learning by doing lies in assuring that no one sinks and in developing coaching sufficient to assure that the learner not only doesn't sink but also refines situation assessment and adaptation capabilities as much as possible given the set of case experiences he or she has.
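The three basic steps of case-based reasoning (situation assessment, retrieval of a relevant case, and adaptation of its solution) can be sketched in code. The following is a minimal, hypothetical illustration: the Case structure, the feature-overlap similarity measure, and the adapt function are invented placeholders, not a description of any system discussed in this article.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """One remembered problem-solving episode (hypothetical structure)."""
    features: dict   # situation description, e.g. {"symptom": "no_power"}
    solution: str    # what worked in that situation

def assess(situation: dict) -> dict:
    # Step 1: situation assessment. Here the observed features are simply
    # passed through; a real assessor would enrich or abstract them.
    return situation

def similarity(a: dict, b: dict) -> int:
    # Deliberately crude measure: count matching feature/value pairs.
    return sum(1 for k, v in a.items() if b.get(k) == v)

def retrieve(library: list, situation: dict) -> Case:
    # Step 2: retrieve the case deemed most likely to be useful.
    return max(library, key=lambda c: similarity(c.features, situation))

def adapt(case: Case, situation: dict) -> str:
    # Step 3: adapt the old solution to the new circumstances. Here
    # adaptation is mere annotation; real adaptation modifies the plan.
    return f"{case.solution} (adapted to {situation})"

def solve(library: list, situation: dict) -> str:
    return adapt(retrieve(library, assess(situation)), situation)
```

On this view, transfer comes from a case library whose coverage approximates the situations the learner will later face, rather than from shared abstract elements.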
The Sherlock Years

As these ideas about how learning by doing should work began to take shape in my thinking, the United States Air Force, in the form of Dr. Sherrie Gott, provided me the opportunity to learn by doing myself. Sherrie energetically pushed the command structure to support development of an intelligent tutor to help technicians learn a very difficult job, the repair of the elaborate test stations that are used to maintain aviation electronic equipment in the field. Over a period
of more than ten years, Sherrie Gott and I, and teams of great colleagues at both the University of Pittsburgh and the Air Force, worked at developing two iterations of a system we called Sherlock (Gott & Lesgold, 2000). The name was meant to characterize the complex problem solving we were teaching, though during a few rough periods we wondered whether the 7% solution might not have been a better path to follow.

It was not uncommon during the initial years of our work for people to insist that intelligent support of learning by doing was beyond the potential of artificial intelligence. Indeed, I used to joke that my colleagues in computer science wrote dissertations proving that what we were doing with Sherlock was theoretically impossible. Actually, if we had followed the standard paradigm for intelligent tutoring systems, it would have been!

The standard paradigm for an intelligent tutoring system is to develop a scheme for modeling the student's knowledge based upon his or her performance and then to provide specific advice that fills in knowledge gaps. The most striking example of successfully using such a strategy is the family of mathematics tutors developed by John Anderson, Ken Koedinger, and Al Corbett at Carnegie Mellon University (Anderson et al., 1995; Corbett, Koedinger, & Anderson, 1997). Their tutors work by attempting to match the performance of the student using a set of rules that represent both successful and unsuccessful strategies for solving problems in mathematics. When the system knows which rules are being exhibited by the student and which are not, it is in a position to provide information about additional rules that the student needs to learn.

When one shifts from the relatively simple and constrained world of academic course problems to real-world problems, the student modeling approach cannot always work. As the number of possible ways of addressing a problem and the
number of collections of knowledge from which a solution can be forged both rise, there is a combinatorial explosion in the amount of rule matching that is needed and in the additional work needed to figure out which rules should be taught or coached next. Not only must a system model the student's performance, it also must figure out which categories of domain knowledge and strategies the student is trying to apply, lest it suddenly offer advice totally unrelated to the student's current thinking.

We solved this problem in Sherlock by abandoning the basic approach of student modeling. Rather than model the student's mind, we invested the machine intelligence in understanding the current problem-solving situation. Instead of modeling which bits of knowledge the student had and had not acquired, we modeled what was known from the actions the student had performed so far on a given problem and what kind of strategy an expert might use to proceed from that point.

Because we used the expert model of the domain – in the case of Sherlock this was a particular piece of complex equipment – to organize the representation of the device, many aspects of Sherlock worked together to help teach situation assessment and the adaptation of solutions learned from earlier problems. Students gathered information to solve equipment maintenance problems via an interface that was itself organized the way experts organize the system. Specifically, a division was made between switching circuitry and the effects produced with the circuitry switched to a particular configuration, and a further division was made among major categories of switched configuration. Every time the student needed to ask for information, he or she was exposed to a typology of system component types that was helpful to the situation assessment process.
Further, whenever a student asked for advice that was best conveyed via a circuit diagram, that diagram was configured to expose how an expert would be conceiving the apparatus given the actions the student had already taken. Whatever the expert would be thinking about took up more space on the screen and was presented in more detail – exactly the detail that would figure in the relevant expert situation assessment at that instant in the course of problem solution. The need for a complex interface to permit access to many different kinds of information was turned into an opportunity to superimpose lessons in problem-solving strategy on top of specific case experiences. And, of course, this was "sink-or-swim" without the possibility of sinking. The student could always keep asking for information that was progressively more explicit in telling what to do next toward a problem solution.

Overall, we had built a system that embodied the idea of learning from cases with coaching about situation assessment and case adaptation. It provided partial simulations of the domain in which knowledge was to be used, structuring that partial simulation to reflect expert knowledge that helped in situation assessment. It had an expert model that drove its coaching capabilities, but it did not do student modeling. Rather, it modeled the current problem situation as it was transformed by actions the student took. This permitted more effective situation-specific coaching and was computationally tractable.

At another level, though, we had to do a lot of analysis and design work that was more fundamentally psychological than computational in character. Specifically, we needed to build a family of progressively more expert models of expertise in the domain, which in turn required a substantial amount of cognitive task analysis. This aspect of the work, perhaps more than any other, benefited greatly from the insights and activity of my colleague Sherrie Gott.
Sherrie pioneered a structured interview strategy that was extremely helpful in doing the
needed cognitive task analysis (Gott, 1987; Gott, Bennett, & Gillet, 1986; Means & Gott, 1988). This scheme made it easy to reconstruct the entire process of an expert's solution to a complex problem, including the way in which the expert represented the problem situation and assessed it, the kinds of cases it reminded him or her of, and the ways in which schemes anchored in other cases were adapted to produce a solution plan.

So far, I have not addressed two basic issues in applying learning by doing strategies, namely the selection and sequencing of problems and the potential for transfer of learning beyond the immediate scope of the problem set on which learning occurs. The problem sequencing issue was addressed in our first Sherlock effort and was based upon Gott's task analysis scheme. Specifically, it was possible to derive, from the cognitive task analysis, a progression of cases that required progressively more of the solution strategies experts bring to their work and that built from problems requiring simple application of a strategy to those requiring iterative or recursive application of multiple solution strategies.

The results of the first Sherlock venture were quite heartening. It was clear that airmen receiving the learning-by-doing training from Sherlock improved their problem-solving capabilities dramatically (see Lesgold et al., 1992). Indeed, a crude and at least partly reasonable account equates the amount of improvement in complex problem-solving ability produced by about 20 hours of Sherlock training with that produced by about four years of on-the-job experience.

This figure, while only partly defensible, raises an important point. If simple on-the-job experience were all that was needed to become a knowledgeable expert, then on-the-job learning should be relatively efficient as a source of expertise. If, in fact, it takes years of on-the-job experience to produce results as good as those
produced by a modest set of simulated and coached experiences, then we need to understand what the special stuff of the coached learning might be. There are multiple candidates. First, as we have noted, it isn't just experience with cases but rather experience with a sequenced set of cases for which coaching is available and for which every aspect of the interface supports learning situation assessment and representation rules that can leverage real work experience. Second, Sherlock presented a concentration of useful cases in a brief period of time. The real world mostly provides opportunities to do the routine. Expertise involving the non-routine is harder to get from everyday work experience, because the right situations occur rarely, and when they do occur they are often handled by established experts, not by students. In medical education, we concentrate hard cases into teaching hospitals in order to provide a good case mix, and we use grand rounds to assure that situation assessment and solution adaptation are made explicit and salient to students. Domains will differ in how much they benefit from real-world versus simulated experience of this kind.
Sherlock 2: Transfer

There were a number of technical problems surrounding the first Sherlock. It ran on very exotic equipment, and it was very difficult to figure out what it really cost, since it evolved out of a research project on cognitive task analysis. More important, though, we did not have a good model for transfer, nor did we have any tests to see if Sherlock produced transfer. So, like many a filmmaker faced with a successful product, we convinced our sponsors to let us produce a sequel, in which we focused our evaluation on transfer issues. The results have been reported elsewhere in some detail (Gott & Lesgold, 2000).

We added some tools that permitted students to compare their solutions with those of experts and to get more explanation of differences between their approach and a more expert approach. Figure 1 shows one simple example, a
checklist by which the student could evaluate his or her performance, after which the system would display an expert evaluation and explain the reasons for suggesting room for improvement on particular issues.

Figure 1 about here

Basically, this kind of approach amounts to telling about the principles for situation assessment and adaptation of strategies but anchoring all of this telling explicitly to the case just presented. The learning by doing is supplemented by explication, one way or another, of the experience one has just had. It turns out that this sort of approach works remarkably well, when measured in terms of subsequent capability to actually do complex problem solving.

Gott and Lesgold (2000) tested the adequacy of both Sherlock and Sherlock 2 using what Gott called a verbal troubleshooting test. In such a test, a problem is posed, and the student is asked to state each successive action he or she would take. The experimenter gives the result of the action, and the student then goes on to the next action, eventually stating his or her diagnosis for the machine problem that was posed.

To test for transfer, a new machine was designed that could be diagnosed using the same principles but that had different kinds of functions as part of its design, along with some differences in electronics and basic operations. We called this mythical system the Frankenstation. It had the same family resemblance to the domain of original training as several actual machines, but by using an imaginary machine, we could control for any prior experience that a few students might have had with other real machines. The basic results were that students trained on Sherlock 2 diagnosed both the domain machine and the Frankenstation much better than controls and almost as
well as senior experts. So, we conclude that the approaches used in Sherlock and Sherlock 2 are effective and do produce transfer (see Gott & Lesgold, 2000, for details).
The Intel Experience

After we finished the Air Force work, an opportunity arose to work jointly with Intel Corporation to extend the learning by doing approach, which we then called intelligent coached apprenticeship, to use in training technicians who maintain equipment used to make computer chips. This allowed us to learn how to make learning by doing as economically efficient as possible and also to extend some of the transfer-related affordances we experimented with in the Sherlock 2 effort.

I learned a lot from this experience, in many different ways. By being part of a joint working team, partly at the University of Pittsburgh and partly at Intel, I got to experience some of the best of modern business organization and practice. The team included people ranging from technicians with work experience but no more than two years of formal post-secondary education to people with doctorates. On any given day, the real planning and management of the project was led by whoever had the most useful information for the next steps. Technicians sometimes modified the project Gantt charts and working plan just like managers did, if they happened to have relevant information. The project was also remarkably free of individual ego, with everyone working toward shared goals. If a team member had a personal or family emergency, the work went on without interruption, with others filling in the missing contributions of whoever was temporarily hors de combat. My sense today of the new demands of the work world on our education system is driven substantially by the experiences I had with Intel and the universality of certain skill requirements in the work team, regardless of level of professional or technical education.
The Air Force differs from the business world in some important ways. Most important, it makes decisions to be cost-effective, but without the added discipline of profit structures for its work. Some defense investments are so important that we have to spend the money, regardless of how much it is. To be useful for the rest of the economy, an approach needs to be cost-effective within the scope of normal business operations, and it has to have a cost that is consistent with the decision-making structure of everyday businesses.

The first issue, general cost-effectiveness, is quite straightforward. Sherlock cost about $2.5 million, and Sherlock 2 cost about $1 million. We did two more iterations of learning by doing technology with Intel, and they did a fifth iteration on their own. While the exact cost figures are proprietary, I can report that the pattern of each generation costing less than 50% of its predecessor held up, more or less, across the entire series of implementations. The current cost for a learning by doing system produced by a practiced team is in five figures, and the return on investment for this type of training is huge, because the domains involved include problems that sometimes take hours or days to be solved and that hold up significant chunks of product manufacturing lines.

However, there are two types of problems we have not yet overcome. First, businesses do not have reliable, replicated models of the learning curve for entering the learning by doing business. They can see the six- or seven-figure initial cost for the first effort, but are not confident that the second or third effort will be clearly practical. Until we in the psychology world do enough work of this kind to clarify the learning curve, companies will see a move into learning by doing as too risky.

A second problem has to do with the structure of organizations and who within the organization has to take the risk. It is quite feasible to think of building
intelligent systems for learning by doing for under $100,000 and for those systems to produce returns on investment in the millions of dollars. However, training departments often operate with average budgets for a new "course" that are in the low five-figure range. It is an act of courage for a training manager to spend two to three times as much money on an approach he or she is just starting to understand, and with which there has been little departmental experience, as is spent on projects that the department knows it can do well (by current standards). Either we need to introduce significant training on developing learning by doing systems into the programs that produce training managers, or we will need to get the costs of such systems down even lower.

The work with Intel also provided additional opportunities to refine our approach to training for transfer, largely because of one of those serendipitous experiences Skinner used to talk about. When Intel first raised the prospect of building an intelligent coached apprenticeship system, they considered several possible jobs for which they needed more training in complex problem solving. Eventually, they ended up developing the training for an ion deposition system, one that puts layers on chips. In our second effort, we ended up using an ion beam implant machine, one that writes circuit components on chips. Amazingly, it turned out that a physicist interested in science education who had been working with me on some youth apprenticeship approaches happened to hold patents from other domains on both ion deposition and ion beam implant processes. This person, Martin Nahemow, was the main source of a transfer-producing approach we called the Process Explorer (see Lesgold & Nahemow, 2001, for details).
The core of Nahemow’s thinking was that there are both categories of problem manifestations and categories of system failures that index much of the knowledge an expert accumulates about complex problem solving in a domain,
and that when a particular manifestation occurs, this is a perfect opportunity to learn more about the ways in which a certain kind of system failure leads to a certain kind of symptom. Computationally, this basic idea can readily be turned into a learning affordance for an intelligent learning by doing system.

The first task is to produce the equivalent of a matrix in which the rows are system failure types and the columns are types of manifested failures. This would be a very big matrix, so big in fact that we would never store the whole thing, since most of the cells would be empty. Displaying such a matrix to a student would be worse than useless – it would be unnecessarily confusing. However, only a small piece of the matrix is relevant to any given problem, and that is exactly the piece that can be a useful source of transfer-related knowledge anchored in the problem itself.

Here's how it works. Suppose that a particular problem presented to a student involves a manifestation like "sandy grit is being deposited on chip wafers during the manufacturing process." One could then select all of the system failures that can produce grit. This defines the rows of a submatrix that could be useful. We then select all the manifestations that (a) can be produced by that subset of system failures, and (b) can be explained with a simple functional relationship. These define the columns of the submatrix. With appropriate criteria for relevance, one generally ends up with a matrix of perhaps half a dozen rows and roughly as many columns, which is quite tractable. Since most manifestations in a manufacturing process are a deviation, upward or downward, from a quality standard (statistical process control drives most productive manufacturing processes, and the manifestations of system failure are usually deviations of quality parameters), the cells in such a matrix can have markers like ↓↓ or ↑ to indicate the direction and magnitude of deviation.
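The submatrix selection just described can be expressed computationally. The sketch below assumes a sparse mapping from (failure type, manifestation) pairs to deviation markers; the specific failure and manifestation names are invented for illustration and do not come from any actual system.

```python
# Sparse "matrix": (failure_type, manifestation) -> deviation marker.
# Entries are hypothetical examples, not real process knowledge.
KNOWLEDGE = {
    ("seal_leak", "grit_on_wafer"): "↑↑",
    ("seal_leak", "chamber_pressure"): "↑",
    ("pump_wear", "grit_on_wafer"): "↑",
    ("pump_wear", "deposition_rate"): "↓",
    ("beam_drift", "line_width"): "↓↓",
}

def submatrix(observed: str) -> dict:
    """Select the small submatrix relevant to one observed manifestation."""
    # Rows: all failure types that can produce the observed manifestation.
    failures = {f for (f, m) in KNOWLEDGE if m == observed}
    # Columns: all manifestations those same failures can also produce.
    manifestations = {m for (f, m) in KNOWLEDGE if f in failures}
    # Keep only the stored cells that fall within the submatrix.
    return {(f, m): v for (f, m), v in KNOWLEDGE.items()
            if f in failures and m in manifestations}
```

For the grit example, the result contains only the failures that can produce grit and the other manifestations those failures produce: a handful of rows and columns rather than the full matrix, which is what makes it displayable to a student.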
Figure 2 shows an example of the Process Explorer as presented to a student, and Figure 3 shows the
explanation of the relationship selected by the student in Figure 2 (the cell with black and white reversed).

Figures 2 and 3 about here

The Process Explorer also solves a major problem in technical education. A machine like those used at Intel requires many areas of knowledge to be understood. As educators, we often approach such situations by listing all the domains of knowledge that are needed and then just telling the student to take courses in all that stuff. It does not make sense to tell a student with at most two years of post-high-school education that he or she needs courses in physics, physical chemistry, chemical engineering, electrical engineering, and quantum mechanics before meeting the prerequisites for job-specific training. Nor is it possible to replace all this disciplinary knowledge with a set of rote-learned rules. The dilemma we face is that technical people need real scientific knowledge to underpin their work skills, but they cannot spend years acquiring it.

The learning by doing approach offers a way to select bite-sized pieces of this knowledge and make them available in the specific contexts for which they are needed (see also Bonar, 1986). Through vehicles like the Process Explorer, it is quite possible to insert relevant conceptual presentations and explanations into learning by doing, and it can be done dynamically and under control of the learner. This is one way – and I am sure there are others – of realizing the ideas of learning by doing and learner-controlled learning in a disciplined and powerful way. We know that this approach works. It has been evaluated (see Lesgold & Nahemow, 2001). What remains to be established is the specific range of learning contexts for which it works – we only looked at complex problem solving in technical domains.
Learning from Conversations

In the past few years, my work has moved in a somewhat different direction, about which I comment only briefly, because the work is not yet finished and it is not certain exactly how it will play out. While many opportunities for learning are grounded in one’s personal and immediate work experiences, others can be grounded in situations that one might study. Just as the good student studies worked-out examples in a physics book (Chi et al., 1989), it is possible for students to have conversations about what they observe in a situation. One case my colleagues and I are currently exploring is discussion among teachers about an example of teaching they have just observed. While our theoretical work is independent, Dan Suthers and I have been working with a team including Daniel Jones, Amy Soller, Megan Hall, and Lauren Resnick to develop web-based conversational environments in which teachers can learn by discussing specific video examples of teaching. The system being produced will be the primary learning environment for the Institute for Learning, a University of Pittsburgh effort in the professional development of teachers and school leaders.

The trick, when learning by doing is not guided by a problem that the learner is trying to solve, is to guide the observation of cases and their discussion among learners. This will be done in two ways. First, there are rubrics that shape the discussions, and often these rubrics are first used in an in-person discussion facilitated by instructional experts. Then, after a structured introduction, students can discuss the cases they have observed. This discussion, when it takes place on a web site, can, in principle, be coached by an intelligent system, though much work is still needed to determine how such systems might work. My colleague Dan Suthers (2001a, 2001b), now at the University of Hawaii, has been addressing a number of important questions about collaborative learning
over web sites, including how to diagrammatically represent an ongoing discussion in ways that assure shared meaning among the discussion’s participants, how to help discussions “point” to multimedia sources, and how to help groups of learners keep track of all that is being discussed so they can integrate and structure what they have learned.

Taking a somewhat different approach, my doctoral student Amy Soller is trying to determine how an intelligent system could recognize, from the sequence of speech acts in a discussion, whether learning is proceeding effectively and whether any one of the participants seems less likely to really be learning (cf. Soller, in press; Soller & Lesgold, 2000). So far, she has had some success applying a hidden Markov modeling scheme to the sequence of speech acts in an online discussion. When one takes a set of network conversations and classifies them according to whether the discussants successfully learned to carry out a task together, it turns out that one can train a hidden Markov model2 that is quite successful at distinguishing the success of additional conversations not used to train the model. The next step is to see whether collections of conversations manifesting particular coachable problems can be distinguished using this approach.

Final Comment

I have had unbelievably good fortune in my career – good mentors, good students, good collaborators – and all have contributed to whatever I may have achieved. I hope that is clear from this brief presentation. There is another message that I hope I have conveyed: the potential for a field of cognitive engineering that can be helpful to our society and to the world. However, we will not have such a field unless we encourage our students to acquire enough mathematics, science, and formal skills to be able to work with and seek out the complementary areas of expertise they will need to be true cognitive engineers. Not all
psychology students will aspire to an engineering career, but given the range of knowledge needed in many areas of psychological research, none will be hurt by being encouraged to seek a strong mathematical, formal-symbolic, and scientific background as part of their initial liberal educations. Had I not had teachers, family, and friends who embodied this advice, I would not have been able to do the work that APA decided to honor.

References

Anderson, J. R., Corbett, A. T., Koedinger, K., & Pelletier, R. (1995). Cognitive tutors: Lessons learned. The Journal of the Learning Sciences, 4(2), 167-207.

Bonar, J., Cunningham, R., & Schultz, J. (1986). An object-oriented architecture for intelligent tutoring. Proceedings of the ACM Conference on Object-Oriented Programming Systems, Languages and Applications. New York: ACM.

Carbonell, J. (1986). Derivational analogy: A theory of reconstructive problem solving and expertise acquisition. In R. S. Michalski, J. G. Carbonell, & T. M. Mitchell (Eds.), Machine learning: An artificial intelligence approach (Vol. II, pp. 371-392). Morgan Kaufmann.

Chi, M. T. H., Bassok, M., Lewis, M. W., Reimann, P., & Glaser, R. (1989). Self-explanations: How students study and use examples in learning to solve problems. Cognitive Science, 13, 145-182.

Corbett, A. T., Koedinger, K. R., & Anderson, J. R. (1997). Intelligent tutoring systems. In M. G. Helander, T. K. Landauer, & P. V. Prabhu (Eds.), Handbook of human-computer interaction (pp. 849-874). Amsterdam, The Netherlands: Elsevier Science B. V.
Gott, S. P. (1987). Assessing technical expertise in today’s work environments. Proceedings of the 1987 ETS Invitational Conference (pp. 89-101). Princeton, NJ: Educational Testing Service.

Gott, S. P., Bennett, W., & Gillet, A. (1986). Models of technical competence for intelligent tutoring systems. Journal of Computer-Based Instruction, 13, 43-46.

Gott, S. P., & Lesgold, A. M. (2000). Competence in the workplace: How cognitive performance models and situated instruction can accelerate skill acquisition. In R. Glaser (Ed.), Advances in instructional psychology. Hillsdale, NJ: Erlbaum.

Kolodner, J. (1993). Case-based reasoning. San Mateo, CA: Morgan Kaufmann.

Kolodner, J. (1994). From natural language understanding to case-based reasoning and beyond: A perspective on the cognitive model that ties it all together. In R. C. Schank (Ed.), Beliefs, reasoning and decision making (pp. 55-110). Hillsdale, NJ: Lawrence Erlbaum Associates.

Laird, J. E., Newell, A., & Rosenbloom, P. S. (1987). Soar: An architecture for general intelligence. Artificial Intelligence, 33, 1-64.

Leake, D. (1996). Case-based reasoning: Experiences, lessons, and future directions. Menlo Park, CA: AAAI Press/MIT Press.

Lesgold, A. M., Lajoie, S. P., Bunzo, M., & Eggan, G. (1992). SHERLOCK: A coached practice environment for an electronics troubleshooting job. In J. Larkin & R. Chabay (Eds.), Computer assisted instruction and
intelligent tutoring systems: Shared issues and complementary approaches (pp. 201-238). Hillsdale, NJ: Lawrence Erlbaum Associates.

Lesgold, A., & Nahemow, M. (2001). Tools to assist learning by doing: Achieving and assessing efficient technology for learning. In D. Klahr & S. Carver (Eds.), Cognition and instruction: Twenty-five years of progress. Mahwah, NJ: Erlbaum.

Means, B., & Gott, S. P. (1988). Cognitive task analysis as a basis for tutor development: Articulating abstract knowledge representations. In J. Psotka, D. Massey, & S. Mutter (Eds.), Intelligent tutoring systems: Lessons learned (pp. 35-58). Hillsdale, NJ: Erlbaum.

Schank, R. C., & Cleary, C. (1995). Engines for education. Mahwah, NJ: Erlbaum.

Soller, A. L. (in press). Supporting social interaction in an intelligent collaborative learning system. International Journal of Artificial Intelligence in Education, 12.

Soller, A., & Lesgold, A. (2000, November). Modeling the process of collaborative learning. Paper presented at the International Workshop on New Technologies in Collaborative Learning, Awaji-Yumebutai, Japan.

Suthers, D. (2001a). Architectures for computer supported collaborative learning. To appear in Proceedings of the IEEE International Conference on Advanced Learning Technologies (ICALT 2001), 6-8 August 2001, Madison, Wisconsin.

Suthers, D. D. (2001b). Towards a systematic study of representational guidance for collaborative learning discourse. Journal of Universal
Computer Science, 7(3). Electronic publication: http://www.jucs.org/jucs_7_3/towards_a_systematic_study

Thorndike, E. L., & Woodworth, R. S. (1901). The influence of improvement in one mental function upon the efficiency of other functions (I). Psychological Review, 8, 247-261.

Viterbi, A. J. (1967). Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Transactions on Information Theory, IT-13, 260-269.

Whitehead, A. N. (1929). The aims of education and other essays. New York: Macmillan.
Figures
Figure 1. Example of Transfer-Related Coaching.
Figure 2. Process Explorer Example.
Figure 3. Example explanation for a cell of the process explorer.
Footnotes
1. We went to this approach initially because our control groups did not have experience using the tutor. Consequently, if we had used the tutor to present test problems, we would not have known whether differences were due to unfamiliarity with the interface or to actual differences in troubleshooting capability.
2. A Markov model is one that is completely specified by listing its states, an identifiable observable outcome corresponding to each state, and the probability of moving from each state to any of the other states that can be reached directly. A hidden Markov model is one whose states have specific probabilities of generating any given observable outcome, rather than always generating a specific outcome with certainty. An algorithm is available that determines the best-fitting hidden Markov model for a process, given a set of sequences generated by that process (Viterbi, 1967).
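To make the footnote concrete, here is a minimal sketch of how likelihoods under two hidden Markov models can be compared to classify a conversation’s sequence of speech acts. The speech-act alphabet (Question, Explain, Acknowledge), the two-state structure, and every probability below are hypothetical values set by hand for illustration; a real system would estimate them from labeled conversations rather than fix them in advance:

```python
# Classify a speech-act sequence by comparing its likelihood under two HMMs.
def forward_likelihood(obs, start, trans, emit):
    """Probability of an observation sequence under an HMM (forward algorithm)."""
    states = list(start)
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: sum(alpha[r] * trans[r][s] for r in states) * emit[s][o]
                 for s in states}
    return sum(alpha.values())

# Two-state models over speech acts Q(uestion), E(xplain), A(cknowledge).
# "Effective" conversations mix questions with explanations; "ineffective"
# ones are dominated by bare acknowledgments. All values are illustrative.
START = {"s0": 0.5, "s1": 0.5}
TRANS = {"s0": {"s0": 0.7, "s1": 0.3}, "s1": {"s0": 0.3, "s1": 0.7}}
EFFECTIVE   = {"s0": {"Q": 0.6, "E": 0.3, "A": 0.1},
               "s1": {"Q": 0.2, "E": 0.6, "A": 0.2}}
INEFFECTIVE = {"s0": {"Q": 0.1, "E": 0.1, "A": 0.8},
               "s1": {"Q": 0.2, "E": 0.1, "A": 0.7}}

def classify(obs):
    """Label a conversation by whichever model assigns it higher likelihood."""
    p_eff = forward_likelihood(obs, START, TRANS, EFFECTIVE)
    p_ineff = forward_likelihood(obs, START, TRANS, INEFFECTIVE)
    return "effective" if p_eff > p_ineff else "ineffective"
```

With these hand-set parameters, a question-and-explanation sequence such as Q, E, Q, E, E is far more probable under the first model and is labeled effective, while a run of bare acknowledgments is labeled ineffective; in the actual research the models themselves are trained and then applied to held-out conversations.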
Brief Biography of Alan Lesgold

Alan Lesgold received his Ph.D. in psychology from Stanford University in 1971, where he was a student of Gordon Bower (the other members of his doctoral committee were Richard Atkinson and Herbert Clark). He joined the Learning Research and Development Center and Department of Psychology at the University of Pittsburgh that same year. Earlier, as an undergraduate at Michigan State, he had worked with Charles Wrigley on a variety of large-scale multivariate data analysis projects. This allowed him to develop his psychological knowledge while learning to use information technologies, and it also provided his first teaching opportunity: helping with a class on computer programming for high school students in 1964, one of the first such classes to be offered. Lesgold credits Robert Glaser, Lauren Resnick, and Arthur Melmed as important mentors in his postdoctoral career.

Currently at Pitt, he is Professor and Dean of the School of Education, Professor of Psychology and Intelligent Systems, and Senior Scientist in the Learning Research and Development Center. His current research interests are uses of digital video and web-based discussions in teaching and learning. Until July 2000, he served as executive associate director of the Learning Research and Development Center at the University of Pittsburgh. Lesgold founded and initially directed Pitt’s interdisciplinary doctoral program in cognitive science and artificial intelligence. He holds an honorary doctorate from the Open University of the Netherlands and is a fellow of the Divisions of Experimental, Applied, and Educational Psychology of the American Psychological Association, a fellow of the American Psychological Society, and a past president of the Society for Computers in Psychology. He was Secretary/Treasurer of the Cognitive Science Society from 1988 to 1997 and continues to serve on its Board of Governors.
In 1995, he was awarded the Educom Medal by Educom and the American Psychological Association for contributions to educational technology.
Lesgold served on the National Research Council Board on Testing and Assessment from 1993 through 1998 and chaired the Board’s Roundtable on Schooling, Work, and Assessment. Previously, he served on the personnel performance panel of the National Research Council Committee on Strategic Technologies for the Army and on the NRC review committee for the Army Research Laboratory. He served on two Congressional Office of Technology Assessment advisory panels and was the chair of the Visiting Panel on Research of Educational Testing Service.

In the 1970s, Lesgold slowly moved his work from laboratory studies of memory to experimental work on reading comprehension and its acquisition. In writings with Charles Perfetti, he addressed reading as a flexible form of expertise and discussed trade-offs among word recognition skills, prior domain knowledge, and general comprehension skills. For the remainder of his career, he has dealt, one way or another, with learning by doing and how it can be facilitated.

In the early 1980s, Lesgold published articles on the acquisition of expertise and complex skills in medicine and technical domains. He then undertook to translate his and others’ findings into a useful technology of intelligently coached instruction. One result was Sherlock – an electronic learning environment for learning to troubleshoot complex electronic equipment. Sherlock provided learners with real-life cases and guided them through their learning process by “coached apprenticeship.”

In the late 1980s and early 1990s, Lesgold continued his work on electronic apprenticeship environments. But now the focus was more on the assessment of complex performances, on the authentic measurement of job performance, and on the design of intelligent systems for testing. An early advocate – following Robert Glaser – of full integration of teaching and testing, he stressed the
importance of context-specific feedback for the development of competencies. Lesgold and colleagues developed the technology of intelligently coached learning by doing over the period from 1986 to 1999, with sponsorship by the U.S. Air Force, U S WEST, and Intel Corporation. It is this work that is the basis for the present article.

More recently, he and colleagues also developed a technology for supporting rich collaborative engagement of students and professionals with complex issues and complex bodies of knowledge. This work, currently focused on teacher professional education, was previously sponsored by the National Science Foundation, the Defense Advanced Research Projects Agency, and the President’s Technology Initiative, and is currently funded by the U.S. Department of Education. The World Bank sponsored related work.

Over the course of his career, Lesgold has collaborated on written work with Bruce Bender, B. Berardi, A. Block, Jeff Bonar, Gordon Bower, John Seely Brown, Marilyn Bunzo, Violetta Cavalli-Sforza, Susan Chipman, Kwang-Su Cho, Bill Clancey, Michal Clark, John Connelly, Mary Beth Curtis, Hildrene De Good, Sharon Derry, Gary Eggan, Paul Feltovich, Mike Feuer, Robert Fitzhugh, Sipke Fokkema, Jim Fox, Claude Frasson, Gareth Gabrys, Gilles Gauthier, Dedre Gentner, Morton Anne Gernsbacher, Robert Glaser, Susan Goldman, Roberta Golinkoff, Maria Gordin, Sherrie Gott, Linda Greenberg, J. Guttman, Kathleen Hammond, Nira Hativa, Ted Hughes, Joyce Ivill, Rob Kane, Sandra Katz, William Keith, Dale Klopfer, Susanne Lajoie, Clayton Lewis, Joel Levin, Debra Logan, Heinz Mandl, Sr. Claire McCormick, Martin Nahemow, Luis Osin, Massimo Paolucci, Charles Perfetti, Dan Peters, Mitch Rabinowitz, Govinda Rao, Fred Reif, Lauren Resnick, Steve Roth, Harriet Rubinson, Osnat Sarig, Mark Seidenberg, Colleen Seifert, Mike Shafto, Joseph Shimron, Elliott Soloway, Dan
Suthers, David Tieman, Eva Toth, Joe Toth, Gregg Vesonder, Yen Wang, Arlene Weiner, David Winzenz, and Dick Wolf.

Selected Bibliography of Alan Lesgold, Chronologically Ordered

Bower, G. H., & Lesgold, A. M. (1969). Organization as a determinant of part-to-whole transfer in free recall. Journal of Verbal Learning and Verbal Behavior, 8, 501-506.

Lesgold, A. M. (1972). Pronominalization: A device for unifying sentences in memory. Journal of Verbal Learning and Verbal Behavior, 11, 316-323.

Lesgold, A. M., & Goldman, S. R. (1973). Encoding uniqueness and the imagery mnemonic in associative learning. Journal of Verbal Learning and Verbal Behavior, 12, 193-202.

Lesgold, A. M., & Perfetti, C. A. (1978). Interactive processes in reading comprehension. Discourse Processes, 1, 323-336.

Lesgold, A. M., Roth, S. F., & Curtis, M. E. (1979). Foregrounding effects in discourse comprehension. Journal of Verbal Learning and Verbal Behavior, 18, 291-308.

Lesgold, A. M. (1984). Acquiring expertise. In J. R. Anderson & S. M. Kosslyn (Eds.), Tutorials in learning and memory: Essays in honor of Gordon Bower. San Francisco: W. H. Freeman.

Lesgold, A. M., Resnick, L. B., & Hammond, K. (1985). Learning to read: A longitudinal study of word skill development in two curricula. In G. Waller & E. MacKinnon (Eds.), Reading research: Advances in theory and practice. New York: Academic Press.
Lesgold, A., Rubinson, H., Feltovich, P., Glaser, R., Klopfer, D., & Wang, Y. (1988). Expertise in a complex skill: Diagnosing x-ray pictures. In M. T. H. Chi, R. Glaser, & M. Farr (Eds.), The nature of expertise. Hillsdale, NJ: Erlbaum.

Glaser, R., Lesgold, A., & Lajoie, S. (1987). Toward a cognitive theory for the measurement of achievement. In R. R. Ronning, J. Glover, J. C. Conoley, & J. C. Witt (Eds.), The influence of cognitive psychology on testing. Hillsdale, NJ: Erlbaum.

Lesgold, A. M., Lajoie, S. P., Bunzo, M., & Eggan, G. (1992). SHERLOCK: A coached practice environment for an electronics troubleshooting job. In J. Larkin & R. Chabay (Eds.), Computer assisted instruction and intelligent tutoring systems: Shared issues and complementary approaches (pp. 201-238). Hillsdale, NJ: Lawrence Erlbaum Associates.

Lesgold, A. (1994). Assessment of intelligent training technology. In E. Baker & H. O’Neil, Jr. (Eds.), Technology assessment: In education and training (Vol. 1, pp. 97-116). Hillsdale, NJ: Lawrence Erlbaum Associates.

Lajoie, S. P., & Lesgold, A. M. (1992). Dynamic assessment of proficiency for solving procedural knowledge tasks. Educational Psychologist, 27, 365-384.

Katz, S., & Lesgold, A. (1993). The role of the tutor in computer-based collaborative learning situations. In S. Lajoie & S. Derry (Eds.), Computers as cognitive tools (pp. 289-317). Hillsdale, NJ: Lawrence Erlbaum Associates.
Lesgold, A. (1993). Information technology and the future of education. In S. Lajoie & S. Derry (Eds.), Computers as cognitive tools (pp. 369-383). Hillsdale, NJ: Lawrence Erlbaum Associates.

Lesgold, A., Katz, S., Greenberg, L., Hughes, E., & Eggan, G. (1992). Extensions of intelligent tutoring paradigms to support collaborative learning. In S. Dijkstra, H. P. M. Krammer, & J. J. G. van Merrienboer (Eds.), Instructional models in computer-based learning environments (pp. 291-311). Berlin: Springer-Verlag.

Lesgold, A. (1996). Quality control for educating a smart work force. In L. B. Resnick & J. Wirt (Eds.), Linking school and work: Roles for standards and assessment (pp. 147-191). San Francisco: Jossey-Bass.

Derry, S. P., & Lesgold, A. M. (1996). Toward a situated social practice model for instructional design. In R. Calfee & D. Berliner (Eds.), Handbook of educational psychology. New York: Macmillan.

Osin, L., & Lesgold, A. (1996). A proposal for the reengineering of the educational system. Review of Educational Research, 66(4), 621-656.

Gott, S. P., & Lesgold, A. M. (2000). Competence in the workplace: How cognitive performance models and situated instruction can accelerate skill acquisition. In R. Glaser (Ed.), Advances in instructional psychology. Hillsdale, NJ: Erlbaum.

Lesgold, A., & Nahemow, M. (2001). Tools to assist learning by doing: Achieving and assessing efficient technology for learning. In D. Klahr & S. Carver (Eds.), Cognition and instruction: Twenty-five years of progress. Mahwah, NJ: Erlbaum.