454
BOOK REVIEWS
Timothy R. Colburn, Philosophy and Computer Science, Explorations in Philosophy Series, New York: M.E. Sharpe, 2000, xi + 243 pp., ISBN 1-56324-991-X.

Though the combination of philosophy and computer science may evoke a chuckle now and then at a cocktail party, as the author notes at the beginning of this book, it is not such an odd combination after all. Come to think of it, there are philosophies of art, biology, mathematics, science, and so on, so why not of computer science as well? Moreover, as philosophy involves the ‘love and pursuit of wisdom by intellectual means’ and ‘inquiry into the nature of things based on logical reasoning rather than empirical methods’, among other things, it can certainly be combined with computer science to inquire into what computer science is and what it can tell us about the nature of things. There is a reciprocal effect as well, in that computer science creates new things that did not exist before, and an inquiry into their nature can alter our concepts of familiar things. In all these respects, Colburn’s book makes a significant foray.

After the introductory first chapter, the remaining eleven chapters are grouped into three parts. The first two parts are concerned with philosophy and artificial intelligence (AI henceforth), an area that has already seen considerable controversy, though Colburn introduces a novelty in considering how AI might influence epistemology. The third part of the book examines the more fundamental question of what computer science is: Is it a branch of mathematics or an empirical science? As with all philosophical questions, though the answer seems to recede further the more we examine it, an analysis of the various related issues can only deepen our understanding of computer science. I will now examine in turn these two facets where computer science and philosophy osculate.
1. Philosophy and Artificial Intelligence

As mentioned above, this takes up the first two parts of the book. Part I, ‘Philosophical Foundations of Artificial Intelligence’, covers three chapters. Chapter 2 is a very brief (five-and-a-half pages) overview of the different areas of AI. The next two chapters are quite interesting in that they present a retrospective (and accelerated) history of philosophy from the point of view of AI. As one would expect, the focus here is on the mind-body problem, which lies at the heart of the contemporary debate on what it means to have concepts, be intelligent, and so on. The survey starts from Plato’s dialogue justifying the existence of the ‘soul’ (which is identified with the mind), moves through Descartes’ dualism, Hobbes’ ratiocination (which Hobbes himself identifies with computation), and Leibniz’s preestablished harmony, which allows body and mind (soul) to coexist on parallel planes in harmony with each other, and continues through the positivism of the twentieth century to Ryle’s idea that this dualism arises from a mistaken category application, as in counting the concept of a parade among the objects, like marching bands, that make up a parade. We should
take note of Leibniz’s preestablished harmony here, because it is used later, in the third part of the book, to resolve a similar dualism that arises in computer science. The second part of the book, ‘The New Encounter of Science and Philosophy’, covers Chapters 5 through 8. The novelty of the meeting point between science and philosophy alluded to lies in the reciprocal effect of science on philosophy. Of course, philosophers of science have to take into account recent and not-so-recent developments in science, and in that sense science has an obvious impact on philosophy (see, for instance, Kuhn, 1970; or Popper, 1965). But Colburn is referring to a different kind of effect, one that introduces an empirical element into philosophy: “With computers, however, it became possible to test one’s epistemological theory if the theory was realizable in a computer model” (p. 9, emphasis original). This ‘naturalization’ of philosophy by computer modeling, something that has come into vogue among certain philosophers in recent years, remains a weak point of this book in my opinion, but to see why, let us first trace Colburn’s arguments. As is evident in the quote above, Colburn illustrates how computer science can bring philosophy closer to the natural sciences by focusing on epistemology and rationality, which are concerned with the study of human knowledge and reasoning. As logic underlies reasoning, Chapter 5 starts with a brief review of the history of logic, through Boole and De Morgan’s propositional calculus to Frege’s predicate calculus, and then goes on to discuss some of the early logic-based AI programs, like GPS and ELIZA, that were essentially symbol-processing systems. At the end of the chapter, the limitations of logic-based approaches to modeling human commonsense reasoning, including the so-called ‘frame problem’, are briefly presented.
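To make the phrase ‘essentially symbol-processing systems’ concrete: an ELIZA-style program is, at bottom, pattern matching and substitution over uninterpreted symbols. The following minimal Python sketch uses invented rewrite rules (not Weizenbaum’s actual script) to show how such a system produces superficially sensible responses with no semantics behind them:

```python
import re

# ELIZA-style rewrite rules: (pattern, response template).
# These rules are illustrative inventions, not Weizenbaum's originals.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*) mother (.*)", "Tell me more about your family."),
]

def respond(utterance: str) -> str:
    """Apply the first matching rule. The 'understanding' here is
    pure symbol substitution; the program attaches no meaning to
    the words it shuffles around."""
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.fullmatch(pattern, text)
        if m:
            return template.format(*m.groups())
    return "Please go on."  # default when no rule fires

print(respond("I am afraid of computers"))
# -> "How long have you been afraid of computers?"
```

That a dozen lines of string manipulation can sustain an illusion of conversation is exactly what makes the later question, of when such a system can be said to ‘understand’, so pressing.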
Chapter 6, ‘Models of the Mind’, provides a glimpse of what is perhaps the most controversial topic in philosophy and AI, concerned with issues like what ‘mind’ is, when an agent can be said to possess ‘meanings’, what ‘understanding’ is, and so on. On one hand, there are skeptics such as Fetzer and Searle, who deny ‘mentality’ to computational systems on the grounds that their concepts are not connected to their referents through causality. On the other hand, there is the ‘functionalism’ championed by many (though not all) AI researchers, which essentially maintains that if a computational system displays the same input-output behavior as a human, then there is no reason to deny it ‘mentality’ or ‘understanding’. For example, the well-known Turing test for intelligence is strictly functional, but Searle’s Chinese room argument purports to show that a system can exhibit acceptable input-output behavior without any understanding of the symbols mediating the behavior. Connectionism and its success with low-level perceptual tasks like face recognition seem to suggest that such systems may be able to evade the causality argument, but Colburn points out that they suffer from “an epistemological poverty of not explicitly representing the inferential transformations inherent in all reasoning” (p. 86). In particular, connectionist systems are not able to capture the ‘justification’ relation that is the cornerstone of rationality.
In spite of these misgivings about functionalism and computational conceptions of the mind, Colburn argues, AI systems might provide a testbed for philosophy, at least for those mental processes that are primarily concerned with symbol-based transformations. One such instance is ‘rational reasoning’, and the next two chapters of the book are devoted to illustrating how AI models of reasoning can be used to test epistemological theories empirically. Chapter 7 provides an overview of different models of reasoning, focusing on ‘defeasible reasoning’, also known as ‘default’ or ‘non-monotonic’ reasoning. The main characteristic of such reasoning is that conclusions reached earlier can be retracted (or ‘defeated’) when more information is received later. The proposal to add an empirical dimension to epistemology is laid out in Chapter 8. The example used is that of heuristic search. Heuristics are rules of thumb designed to guide a system (or an agent) efficiently through a large search space towards its goals. Heuristics are fallible in that they are not guaranteed to work, and might sometimes even be counterproductive, but a good heuristic is effective most of the time. Most of the chapter is devoted to explaining, with a detailed example, what heuristics are and how they can be used to model defeasible reasoning. How heuristic search helps to provide an empirical testbed for epistemology is discussed in the first four pages and the last two pages of the chapter. Colburn argues that “epistemology can still be normative, because the patterns of reasoning we build into models are motivated by how we think a rational agent ought to reason” (p. 108); that the “study of heuristics is an example of a confluence of artificial intelligence and epistemology, since heuristics can be regarded as justifying the actions and beliefs of problem-solving agents” (p. 124); and that “epistemological theories, because their subject matter is human knowledge, are subject to the testing and falsifying that attends the study of any natural artifact, since human knowledge can be modeled in computer programs” (p. 125). It is this leap from the implementation of heuristic search to the naturalization of epistemology that I find quite lacking in substance. Though various objections can be raised against it, I will briefly mention just one fundamental weakness. As Colburn acknowledges, epistemology is primarily concerned with characterizing what knowledge is, which requires that an agent have a belief, that the belief be true, and that the agent have an appropriate justification for believing it. (Notice that the justification must be ‘appropriate’ in order to avoid Gettier’s paradox.) So a major theme for epistemology is what constitutes ‘an appropriate justification’. Now, we can, of course, design or think up different criteria of justification (namely, heuristics), implement various artificial agents based on these criteria, and observe their behaviors. These observations, in turn, may lead us to modify our initial criteria and implement other systems. All this sounds nice, and plausible with some imagination. But then where is the fish? The book includes a long quote from Pollock on page 103 to argue that building computer implementations has a reciprocal effect on theories, but what would have helped Colburn’s case much more is a couple of concrete examples showing this reciprocal effect. The
example of heuristic search does not really help, because it does not show any reciprocal effect of implementation on epistemology. My feeling is that this case for adding an empirical component to epistemology is somewhat overblown, and that the reason the book does not contain any concrete examples is that no such significant examples exist. Although it may be too much to expect something like the Michelson–Morley experiment in epistemology, one still needs to see some real evidence of a significant epistemological issue being settled by computer implementation before such a claim can be taken seriously.

2. Philosophy of Computer Science

The last four chapters of the book are devoted to a discussion of some issues concerning the fundamental nature of computer science. The discipline is rather like mathematics in some ways, and like an empirical science in others. (It is interesting to note that if we trace the historical origins of computer science departments across various universities, we will find that some started out as part of the mathematics department, while others grew out of the electrical engineering department.) Chapters 9 and 10 examine this dual nature of computer science. The basis for considering computer science a branch of mathematics is the view that any program’s outcome can be predicted (in principle) from an analysis of the algorithm itself, without any empirical testing. This seems quite reasonable, because programs are written in a formal language with a clearly defined syntax and semantics. But Colburn’s analysis, using the example of a simple program to compute the factorial of a given number, reveals many implicit defeasible assumptions underlying this view. The main thrust of Colburn’s counterargument, which is largely based on Fetzer’s work undermining program verification (Fetzer, 1988), is that programs are codes running on physical machines.
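A factorial program of the kind at issue can be sketched as follows (a reconstruction in Python; Colburn’s own example may differ in language and detail). Everything the abstract analysis proves about it concerns the algorithm; the comments mark where the physical machine enters:

```python
def factorial(n: int) -> int:
    """On an abstract machine, factorial(n) == n! is provable by
    induction over this loop, with no empirical testing needed."""
    if n < 0:
        raise ValueError("factorial is undefined for negative n")
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

# On a real computer, trusting the printed answer additionally
# assumes a correct interpreter/compiler, sound hardware, adequate
# memory, and so on -- the implicit defeasible assumptions that
# Colburn (following Fetzer) draws attention to.
print(factorial(5))  # -> 120
```

The proof and the printout thus answer two different questions: one about a mathematical object, the other about a physical process.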
So even though we are able to predict the outcome of an abstract program (an algorithm) running on an abstract machine, to say something about the same program running on a real computer on the basis of this prediction requires a host of other assumptions: that the program has been compiled correctly, that the hardware is functioning correctly, and so on. This illustrates what one commentator on Fetzer’s original article called the ‘burnt cookie’ principle (Conte, 1989), namely that no matter how good a cookie recipe is, if the oven thermostat is broken the cookies will get burnt; but putting too much weight on it leads us to a kind of Humean skepticism. If we can study abstract programs as mathematical objects, study their properties, and predict their outcomes (on abstract machines, of course), this certainly increases our understanding of, and faith in, those programs running on real machines. It is in this sense that I believe the mathematical aspects of computer science are useful. The empirical nature of computer science is rather obvious, as a program can be executed and its behavior observed. This activity is referred to as ‘testing’, and it sharply distinguishes computer science from mathematics. In Colburn’s analysis, five different senses of ‘testing’ are distinguished and discussed:
1. Running a program to see if it conforms to its specifications. (p. 166)
2. Running a test program to see if the hardware is functioning correctly. (p. 166)
3. Running a program to see if the algorithm it is based on solves a problem. (p. 169)
4. Running a program to see if the program specifications on which the program is based solve a problem. (p. 170)
5. Running a program to test a hypothesis in a computer model of reality. (p. 171)

Chapter 11 is concerned with the role of ‘abstraction’ in computer science. After presenting a brief history of software engineering and the evolution of abstract data types, Colburn argues that abstraction in computer science is different from that in mathematics. Citing Carnap, Cohen and Nagel, and Hempel, he argues that abstraction in mathematics consists of shedding meaning or content. In contrast, abstraction in computer science involves enlarging the content, in the sense that an abstract data type, for example, is more widely applicable. This conclusion, however, relies heavily on a play on words, because we can also say that an abstract data type loses content in that the details of implementation are shed. Or, in the example of matrix multiplication used to illustrate procedural abstraction (pp. 187–188), we can also say that in the abstracted statement “ReadMatrix(A,n,m)” the content concerning the order in which the matrix elements are read is eliminated. It is also quite debatable whether mathematics really lacks content (see Mac Lane, 1986, for example).

The last chapter of the book is devoted to examining the ontological status of ‘software’. The problem is introduced with an amusing case in which the US State Department prohibited the export of a textbook on applied cryptography because it contained as an appendix a floppy disk holding the programs for encryption software.
The textbook without the disk, however, was allowed to be exported, even though it contained the text of the same programs. The basis for this distinction was: “As text on the printed page, the software was simply a product of every American’s right to free speech. But as magnetic representations of the same software on a different medium, it qualified as part of a potentially dangerous mechanism and a threat to national security” (pp. 199–200). Technically, the distinction seems quite misplaced, because the program in text form is actually much more versatile than the floppy. If the floppy happened to be in the Macintosh format, for example, then the program could not be read on a PC, whereas the program text can be typed in and executed on a variety of machines running different operating systems. Also, as Colburn notes, with a scanner the program text can be loaded mechanically into a system, just like the floppy. Nonetheless, the example illustrates the dual nature of software: it is a piece of text in one respect, and a mechanism in another.

How can these two aspects of software be reconciled? Colburn’s proposal is to introduce a distinction between the ‘medium of description’ and the ‘medium of execution’. The former corresponds to the textual representation of the program. Here, an attempt is made to distinguish between the textual representation of a program and its magnetic representation on a floppy: “when you look at a printout of a program, you see a lot of statements written in a formal language. But when you hold the same program on a floppy disk in your hand, you feel the weight of a piece of a machine” (p. 201). I think this distinction is misplaced and unnecessary. After all, you can also feel the weight of the paper when you hold it in your hand, and the relevant information on the floppy is not in its weight. The main point is that at the level of description a program, whether it is represented as text, as a magnetization pattern on a floppy, or as an electromagnetic wave pattern being transferred over a wireless network, is a static object; but when the program is executing on a computer, it is a causal dynamic system that can be considered a mechanism. Moreover, there is a correlation between the level of description and the level of execution that is the deliberate result of the design of the processor hardware and the system software. In effect, this is a manifestation of Leibniz’s principle of preestablished harmony mentioned above, except that, because computers are artificial systems, we do not need to appeal to God to bring about the preestablished harmony: it results from the deliberate efforts of hardware and software engineers.

The main value of this book lies not in the destinations reached, but in showing the scenery along the way. Even if one does not always agree with the author’s conclusions, the analysis of various foundational issues related to computer science reveals that most of them are not as cut and dried as many practitioners may take for granted. If the book provokes further debate and dialogue on these issues among computer scientists and philosophers, it will have served its purpose well.

References

Conte, Paul T. (1989), ‘More on Verification’, Communications of the ACM 32(7), p. 790.
Fetzer, James H. (1988), ‘Program Verification: The Very Idea’, Communications of the ACM 31(9), pp. 1048–1063.
Kuhn, Thomas S. (1970), The Structure of Scientific Revolutions, 2nd ed., Chicago: University of Chicago Press.
Mac Lane, Saunders (1986), Mathematics: Form and Function, New York: Springer-Verlag.
Popper, Karl R. (1965), Conjectures and Refutations: The Growth of Scientific Knowledge, 2nd ed., New York: Harper Torchbooks.
BIPIN INDURKHYA Department of Computer, Information and Communication Sciences Tokyo University of Agriculture and Technology Tokyo 184-8588, Japan E-mail:
[email protected]