Off-Loading Memory to the Environment: A Quantitative Example

JOHN CASE
Department of Computer and Information Sciences, University of Delaware, Newark, DE 19716, USA; E-mail: [email protected]

Abstract. R.W. Ashby maintained that people and animals do not have to remember as much as one might think, since considerable information is stored in the environment. Presented herein is an everyday, quantitative example featuring calculation of the number of bits of memory that can be off-loaded to the environment. The example involves one's storing directions to a friend's house. It is also argued that the example works with or without acceptance of the extended mind hypothesis. Additionally, a brief supporting argument for at least a form of this hypothesis is presented.

Key words: bits, brain, environment, extended mind hypothesis, mathematics

1. Background

R.W. Ashby (who wrote Ashby, 1956, 1960) pointed out many times, in a graduate Cybernetics course I took from him in the late 1960s, that people (and other animals) do not have to remember as much as one might think, since considerable information is stored in the environment. There is related discussion in Simon (1981) about the memory load on ants compared to that on their environment. There is further, somewhat related discussion in Hutchins (1995) about cognition more generally. In particular, Hutchins (1995) focuses on cognition, including memory, for navigation on a naval ship, and maintains that it is distributed among the human personnel, the navigation equipment, and the environment external to the ship.

2. The Quantitative Example

Here, then, is an everyday, quantitative example featuring calculation of the number of bits of memory that can be off-loaded to the environment. Typically, (correct) directions along a series of roads to, say, a friend's house will provide an initial direction together with a sequence of entries such as "and then you come to the Exxon station, at which you should take a right." Suppose the initial direction is fixed. Suppose also that each landmark is unique (e.g., there is at most one Exxon station) and that the path is loop-free. Then, for such sequences having $n$ landmarks with corresponding binary choices (take a right or a left), one need remember only a mapping from landmarks to {right, left}. There are
only $2^n$ such mappings, so the information needing to be stored is a mere $n$ bits. However, by a straightforward counting argument, the set of all such sequential directions numbers $n! \times 2^n$. Remembering any one of those, then, takes $\log_2(n!) + n$ bits.[1] One needs to remember only $n$ of these bits; hence, in effect, the environment stores, among other things, the residual $\log_2(n!) = \Theta(n \log_2 n)$ bits (by formula (3.18) on p. 55 of Cormen et al., 2001).[2] Therefore, in my example, a not-so-savvy agent may store a whole non-repeating sequence of $n$ landmarks, each labeled with right or left. A savvy agent stores only one of the mappings from landmarks to {right, left}, which requires only $n$ bits. In effect, the savvy agent places the memory load of the $\Theta(n \log_2 n)$ bits, which the not-so-savvy agent might have additionally stored, on the environment. These extra bits, which might be, but need not be, stored by the agent, are in any case stored in the environment; they are the bits for storing the ordering of the entries. Of course, the environment contains even more, for example, the whole spatial layout of the landmarks.[3]
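To make these counts concrete, here is a small Python sketch (an illustration of the counting argument, not code from the original article; the function name is hypothetical) that computes, for a given $n$, the $n$ bits the savvy agent stores, the $\log_2(n!) + n$ bits the not-so-savvy agent stores, and the residual $\log_2(n!)$ bits left to the environment.

```python
import math

def direction_bits(n):
    """Bits of memory for directions with n unique landmarks, each
    labeled 'right' or 'left' (an illustrative sketch of the counting
    argument in the text)."""
    # Savvy agent: one of the 2**n mappings from landmarks to
    # {right, left}, so log2(2**n) = n bits suffice.
    savvy = n
    # Not-so-savvy agent: one of the n! * 2**n labeled, non-repeating
    # sequences, so log2(n!) + n bits are needed.
    full = math.log2(math.factorial(n)) + n
    # The difference, log2(n!) = Theta(n log2 n) bits, is the ordering
    # information that can be left to the environment.
    offloaded = full - savvy
    return savvy, full, offloaded

for n in (5, 10, 20):
    savvy, full, offloaded = direction_bits(n)
    print(f"n={n:2d}: savvy={savvy:2d} bits, full={full:5.1f} bits, "
          f"off-loaded={offloaded:5.1f} bits")
```

For $n = 10$ landmarks, for instance, the savvy agent stores 10 bits, while the full labeled sequence costs about 31.8 bits; roughly 21.8 bits of ordering information remain in the environment.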

3. Relation to the Extended Mind Hypothesis

In this section I will first argue that my above quantitative example is independent of the extended mind hypothesis, which allows flexibility concerning the conceptual placement of the boundaries of mind/cognition (including memory). Then (as a bonus, so to speak) I will briefly argue for a form of the extended mind hypothesis anyhow, a form in which I talk about cognition (and not about mind).

In an example from Clark and Chalmers (1998), Inge remembers in, and recalls from, her brain where the museum is, but Otto, an Alzheimer's patient, records in, and recalls from, his notebook where it is. For my quantitative example above, each of Inge and Otto with his notebook would need to store only $n$ bits: Inge might do it in her brain, and Otto with his notebook might do it in the notebook. Either might be less savvy and store an additional $\Theta(n \log_2 n)$ bits, Inge in her brain and Otto with his notebook in the notebook. My quantitative example works whether the bits are stored in a brain or a notebook.

Of course, if one does not allow one's concept of cognition, including memory, to spill over into such devices as Otto's notebook, and instead confines it to the brain, then the team of Otto and his notebook is not doing cognitive memory. However, it seems to me that human conceptualization involves mental cutting of nature at its (apparent) joints, for useful purposes, including thought and communication. Imagine, if you will, a futuristic world containing an Otto with progressively more advanced Alzheimer's disease and with his notebook replaced by successively more intelligent robots, culminating in a highly intelligent robot with legal rights and a job involving its carrying Otto around to visit friends he no longer remembers but may still enjoy. Conceptually, it becomes progressively more useful to exclude Otto mostly or entirely from the boundaries of the navigation cognition required and to ascribe most or all of that cognition to the robot. At least, in between, when
Otto is still cognitively contributing, it would seem useful to conceptualize Otto and the robot as a team (or part of a team) with the relevant cognition distributed between them.[4]

Acknowledgement

The author received support from NSF grant CCR-0208616.

Notes

1. It is useful to reckon the memory size needed in a way that remains valid in all contexts. This is typically done by calculating the number of bits (0's and 1's) required. For example, the standard character set called extended ASCII has exactly 256 characters. The number of bits minimally required to store any one of these 256 characters is, then, $\log_2(256) = 8$ bits, that is, one byte of memory. This number of bits, crucially, minimally enables each possible extended ASCII character to have a different code in memory from each other such character. If we only ever wanted to store the single character b, then the number of bits required would be $\log_2(1) = 0$ bits. It is only when memory has to accommodate the possibility of storing any one of more than one possible thing that more than zero bits are required. Regarding the non-repeating sequences of $n$ landmarks with each landmark labeled with right or left, there are $n! \times 2^n$ possibilities. Here is why. The non-repeating sequences of $n$ landmarks can be counted as follows: there are $n$ choices for the first landmark, $(n - 1)$ for the second, ..., and 1 for the last. This yields a total of $n!$ possible non-repeating landmark sequences. Each landmark in each such sequence is labeled with one of two possibilities, right or left. Hence, there are $n! \times 2^n$ total possible non-repeating sequences of $n$ landmarks with each landmark labeled with right or left. Similarly to the above analysis for extended ASCII, the number of bits required to store any one of the $n! \times 2^n$ possible sequential directions is $\log_2(n! \times 2^n)$, which, by standard logarithm identities, equals $\log_2(n!) + n$ bits.

2. From p. 42 of Cormen et al. (2001), for functions $f, g$ mapping, say, positive integers to non-negative reals, $f(n) = \Theta(g(n)) \overset{\mathrm{def}}{\Longleftrightarrow} (\exists$ positive constants $c_1, c_2$ and a positive integer $n_0)(\forall n \geq n_0)\,[0 \leq c_1 g(n) \leq f(n) \leq c_2 g(n)]$; that is, informally, $f(n) = \Theta(g(n))$ means that, modulo positive multiplicative constants, $g(n)$ provides an asymptotically tight upper and lower bound on $f(n)$. Hence, modulo positive multiplicative constants, $n \log_2 n$ provides an asymptotically tight upper and lower bound on $\log_2(n!)$. Of course, $n \log_2 n$ has an easier-to-understand growth rate than does $\log_2(n!)$.

3. Note that environments, like other memory media, are malleable and, then, may store different things at different times.

4. The concept of mind typically includes some reflective component, but the concept of cognition need not. The quantitative arguments herein are independent of whether a reflective component is included, but the argument comparing Inge and Otto with his notebook or robot does not deal with a reflective component of mind or cognition.
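As a numerical companion to Note 2 (again an illustrative sketch, not part of the original notes; the helper function is hypothetical), one can compute $\log_2(n!)$ via the log-gamma function and watch the ratio $\log_2(n!)/(n \log_2 n)$ drift toward 1 as $n$ grows, consistent with the asymptotically tight bound.

```python
import math

def log2_factorial(n):
    """log2(n!) computed as lgamma(n + 1) / ln 2, avoiding the huge
    integer n! itself (an illustrative helper, not from the notes)."""
    return math.lgamma(n + 1) / math.log(2)

# The ratio log2(n!) / (n * log2 n) creeps toward 1 as n grows,
# consistent with the tight bound log2(n!) = Theta(n log2 n).
for n in (10, 100, 10_000, 1_000_000):
    ratio = log2_factorial(n) / (n * math.log2(n))
    print(f"n={n:>9,}: log2(n!) / (n log2 n) = {ratio:.4f}")
```

The ratio is about 0.66 at $n = 10$ and about 0.93 at $n = 1{,}000{,}000$, creeping toward 1, consistent with the bound.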

References

Ashby, R. (1956), An Introduction to Cybernetics, New York: Wiley.
Ashby, R. (1960), Design for a Brain: The Origin of Adaptive Behavior, second edition, New York: Wiley.
Clark, A. and Chalmers, D. (1998), 'The Extended Mind', Analysis 58, pp. 10–23. Reprinted in P. Grim, ed. (1998), The Philosopher's Annual, Vol. XXI.
Cormen, T., Leiserson, C., Rivest, R. and Stein, C. (2001), Introduction to Algorithms, second edition, Cambridge, MA: MIT Press.
Hutchins, E. (1995), Cognition in the Wild, Cambridge, MA: MIT Press.
Simon, H. (1981), The Sciences of the Artificial, second edition, Cambridge, MA: MIT Press.
