Induction: Processes of Inference, Learning, and Discovery

Randy H. Katz, Computer Sciences Division, UC Berkeley, Evans Hall, Berkeley, CA 94720.

October 1987

Workshop on Computer Architecture for Pattern and Machine Intelligence, October 5-8, Seattle, Washington. Contact Steve Tanimoto, Dept. of Computer Science, FR-35, University of Washington, Seattle, WA 98195; (206) 543-1695.

Compsac 87, October 5-9, Tokyo, Japan. Contact Tosiyasu L. Kunii, Business Center for Academic Societies Japan, Yamazaki Bldg. 4F, 2-40-14, Hongo, Bunkyo-ku, Tokyo 113, Japan; phone 81 (3) 817-5831, or Stephen S. Yau, Northwestern University, Dept. of Electrical Engineering and Computer Science, Evanston, IL 60201; (312) 491-3641.

AAAIC 87, Aerospace Applications of Artificial Intelligence Conference, October 5-9, Dayton, Ohio. Contact Michael Johnston, BDM Corp., 1900 Founders Dr., Kettering, OH 45420; (513) 259-4434.

IEEE International Workshop on AI Applications to CAD Systems for Electronics, October 8-10, Munich, West Germany. Contact Helmuth Benesch, Siemens AG, Otto-Hahn-Ring 6, 8000 Munich 83, West Germany; telephone (89) 636-46666.

Second Knowledge Acquisition for Knowledge-Based Systems Workshop, October 19-23, Banff, Canada. Contact John Boose, Advanced Technology Center, Boeing Computer Services, PO Box 24346, Seattle, WA 98124, or Brian Gaines, Dept. of Computer Science, University of Calgary, 3500 University Dr. NW, Calgary, Alta., Canada T2N 1N4.

Third Annual Expert Systems in Government Conference (AIAA), October 19-23, Washington, DC. Contact Peter Bonasso, Mitre Washington AI Center, 7725 Colshire Blvd., MS W952, McLean, VA 22102; (703) 883-6908.

FJCC 87, Fall Joint Computer Conference, October 25-29, Dallas, Texas. Contact Computer Society of the IEEE, FJCC 87, 1730 Massachusetts Ave. NW, Washington, DC 20036-1903; (202) 371-0101.

November 1987

11th Symposium on Computer Applications in Medical Care, November 1-4, Washington, DC. Contact SCAMC Secretariat, George Washington University Medical Center, Office of Continuing Medical Education, 2300 K St. NW, Washington, DC 20037.

Third Annual Conference on Artificial Intelligence for Space Applications, November 2-3, Huntsville, Alabama. Contact Thomas S. Dollman, NASA/EB4, MSFC, AL 35812; (205) 544-3823.

IEEE International Conference on Computer-Aided Design, November 9-12, Santa Clara, California. Contact ICCAD-87 Secretary, Mentor Graphics Corp., 1940 Zanker Rd., San Jose, CA 95112; (408) 436-1500.

December 1987

Eighth Real-Time Systems Symposium, December 1-3, San Francisco, California. Contact Kang G. Shin, Dept. of EE and Computer Science, University of Michigan, Ann Arbor, MI 48109-1109.

January 1988

Second International Conference on Computer Workstations, January 31-February 3, Santa Clara, California. Contact Patrick Mantey, 335A Applied Science Bldg., Dept. of Computer Science, University of California, Santa Cruz, CA 95064.

February 1988

IEEE Compcon Spring 88, February 29-March 3, San Francisco, California. Contact Compcon Spring 88, 1730 Massachusetts Ave. NW, Washington, DC 20036-1903; (202) 371-0101.

March 1988

Fourth Artificial Intelligence Applications Conference, March 14-18, San Diego, California. Contact AI Conference, Computer Society, 1730 Massachusetts Ave. NW, Washington, DC 20036-1903; (202) 371-0101.

April 1988

Compeuro 88, April 11-15, Brussels, Belgium. Contact Jacques Tiberghien, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels, Belgium.

May 1988

NCC 88, National Computer Conference, May 31-June 3, Los Angeles, California. Contact AFIPS, 1899 Preston White Dr., Reston, VA 22091; (703) 620-8900.

June 1988

Third Conference on Man-Machine Systems (IFAC, IFIP, IEA, IFORS), June 14-16, 1988, Oulu, Finland. Contact Third MMS, The Finnish Society of Automatic Control, PO Box 165, SF-00101 Helsinki, Finland.

Book Reviews

Induction: Processes of Inference, Learning, and Discovery
John H. Holland, Keith J. Holyoak, Richard E. Nisbett, and Paul R. Thagard (MIT Press, Cambridge, Mass., 1986, 416 pp., $24.95, hardcover)

In "The Current State of AI: One Man's Opinion" (AI Magazine, Winter/Spring 1983), Roger Schank offered "a definition of AI that will disqualify most of its practitioners. . . . AI is the science of endowing programs with the ability to change themselves for the better as a result of their own experiences." People who exhibit such behavior are said to be engaged in induction, manifested in activities such as inductive inference, learning, and discovery. How people manage this has long concerned both psychologists and philosophers, and their inquiries have raised more questions than answers; neat and plain solutions (to paraphrase H. L. Mencken) have quickly revealed their inadequacies. Schank's manifesto can be viewed as a call to arms, rallying AI forces to face this imposing problem.

It's not as if AI has ignored the topic. Marvin Minsky's bibliography for Computers and Thought devotes a whole category to "inductive inference machines," replete with several dozen citations. More recently, induction has become a major concern in machine learning. Unfortunately, these efforts have not yielded impressive fruit. In the following excerpt, Holland et al. explain why insights into induction have been so disappointing:

. . . most treatments of the topic have looked at purely syntactic aspects of induction, considering only the formal structure of the knowledge to be expanded and leaving the pragmatic aspects, those concerned with goals and problem-solving contexts, to look out for themselves. In our view, this stance has produced little insight into the way humans do, or efficient machines might, make just the inferences that are most useful. This is not to say that syntactic considerations are irrelevant; indeed, at some level they are inescapable in any computational system. Our claim is simply that pragmatic considerations are equally inescapable.

This observation serves as an overture to their own study of induction. As the authors carefully emphasize, this book describes a computational framework for implementing induction. We should not confuse such a framework with an actual implementation. Rather, we should consider it a feasibility study that indicates issues that must ultimately be addressed and suggests potentially valuable paths of exploration. We should not dismiss these suggestions as idle speculation, however; each is reinforced with behavioral evidence recorded by experimental psychologists. Thus, they can be read as desiderata of the following form: when humans exhibit induction, they also exhibit the following behavior; therefore, implementations of induction should exhibit the same behavior.

Induction is not a traditional scientific-method exercise culminating in the validation of a theory; the authors will not even go so far as to formulate a scientific theory. Their modesty and humility are revealed in the final chapter's epilogue, "Toward a Theory of Induction." There they gather essentials from the preceding 11 chapters, suggest where the evidence is heading, and hope that further interdisciplinary research (clearly emphasizing "interdisciplinary") will lead to more concrete results.

Such modesty should not turn prospective readers away from this book. Ultimately, there is more food for thought in the desiderata investigated here than in the "syntactic" accounts of induction the authors decry. Furthermore, the principles behind the authors' framework are not particularly outrageous; they simply reflect the basic "pragmatic considerations" the authors feel have been neglected:

We view cognitive systems as constantly modeling their environments, with emphasis on local aspects that represent obstacles to the achievement of current goals. Models are best understood as assemblages of synchronic and diachronic rules organized into default hierarchies and clustered into categories. The rules comprising the model act in accord with a principle of limited parallelism, both competing with and supporting one another. Goal attainment often depends in part on flexible recategorization of the environment combined with the generation of new rules. New rules are generated via triggering conditions, most of which are best understood as responses to the success or failure of current model-based predictions.

The authors represent three departments at the University of Michigan: computer science, psychology, and philosophy. Each has helped forge their framework, and each contribution was subject to review and interpretation by practitioners from the other disciplines, resulting in a gestalt that surpasses the sum of its parts. Let's consider the influence of each discipline.

Computer science contributed two technologies, one well established and one emerging: the former is the production system; the latter, the body of connectionist models. The production rule is fundamental to the framework. The major criticism of rule-based architectures is the attention they must devote to deciding which rule to fire, as opposed to simply using rules as knowledge representations. For this reason, the framework introduces limited parallelism with competition and support among rules (a minimal code sketch of this idea follows this discussion). Such competition and support exist in connectionist models, but the framework eschews the more massive parallelism of connectionism, a decision the authors base on evidence from experimental psychology.

The results of such experiments constitute psychology's major contribution: experiments covering many instances of human behavior, as well as some behavior of lower animals, including conditioning, category formation, modeling of the physical and social worlds, understanding of variability, inference based on statistics and deduction, and the use of analogy. In addition, the concept of a "mental model" (which has recently attracted considerable attention in cognitive science) is basic to the framework. Indeed, the framework can be regarded as proposing an architecture for mental models.

Philosophy's contribution is a concern for basic epistemological issues, particularly the nature of knowledge and explanation. The emphasis is on the philosophy of science, reviewing the framework in light of historical evidence detailing the formation of scientific theories. This is particularly fascinating because it demonstrates that philosophy, like psychology, provides experimental evidence vital to the resulting framework's strength. In fact, this examination of scientific theory culminates in a discussion of how the framework may accommodate the historical record of the development of the wave theory of sound.
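To make that rule-competition idea concrete, here is a minimal, hypothetical sketch in Python. It is not the authors' system and is not drawn from the book; the class names, the specificity-plus-strength ranking, the parallelism limit, and the reinforcement factors are all invented for illustration. It shows only condition-action rules matching in limited parallel, competing as a small default hierarchy, and being strengthened or weakened according to whether their predictions succeed.

```python
from dataclasses import dataclass, field


@dataclass
class Rule:
    condition: frozenset     # features that must be present in the current situation
    action: str              # the prediction or response this rule proposes
    strength: float = 1.0    # credit accumulated from past successes


@dataclass
class RuleSystem:
    rules: list = field(default_factory=list)
    parallel_limit: int = 3  # "limited parallelism": only a few winners act at once

    def step(self, situation: frozenset) -> list:
        # Every rule whose condition is satisfied enters the competition.
        matched = [r for r in self.rules if r.condition <= situation]
        # More specific and stronger rules outrank general defaults,
        # yielding a simple default hierarchy with exceptions.
        matched.sort(key=lambda r: (len(r.condition), r.strength), reverse=True)
        return matched[: self.parallel_limit]

    def reinforce(self, winners: list, succeeded: bool) -> None:
        # The success or failure of the winners' model-based prediction
        # adjusts their strengths for future competitions.
        for r in winners:
            r.strength *= 1.2 if succeeded else 0.8


# A general default and a more specific exception compete on the same input.
system = RuleSystem(rules=[
    Rule(frozenset({"bird"}), "predict: it flies"),
    Rule(frozenset({"bird", "penguin"}), "predict: it does not fly"),
])
winners = system.step(frozenset({"bird", "penguin"}))  # the exception outranks the default
system.reinforce(winners, succeeded=True)              # credit the rules that fired
```

The sketch collapses a great deal, of course; its only point is that rules are selected and credited pragmatically, by their contribution to current predictions and goals, rather than by their syntactic form alone.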

One could enumerate the many contributions of this book at great length. Most important, however, Holland et al. directly and honestly confront a fundamental question: how experience contributes to intelligence. Of almost equal importance, the book offers an object lesson in how the "mechanistic" materials of computer science can profitably be brought to bear on psychological and philosophical issues concerning the great mysteries of the mind. Such a mechanistic approach is of particular interest in the book's attempt to harness intuitively appealing notions about mental models for the ultimate implementation of intelligent machines.

Holland et al. also provide some excellent counters to points raised by Hubert and Stuart Dreyfus in Mind Over Machine (reviewed for IEEE Expert by Lotfi Zadeh, Summer 1987, pp. 110-111), a section of which appeared in our Summer 1986 issue. A fundamental element of the Dreyfus argument, excerpted below, concerns the holistic nature of thought:

Experimental psychologists have shown that people actually use images, not descriptions as computers do, to understand and respond to some situations. Humans often think by forming images and comparing them holistically. This process is quite different from the logical, step-by-step operations that logic machines perform.

While the book cites no Dreyfus literature, Holland et al. are clearly aware of these arguments (they base their discussion on the writings of Fodor and Quine). The points above are addressed and shown to be either inaccurate or irrelevant as criticisms of the proposed framework. Simply put, this book effectively demonstrates what happens when separate disciplines support rather than attack each other.

Stephen W. Smoliar
USC Information Sciences Institute
4676 Admiralty Way, Ste. 1001
Marina del Rey, CA 90292
