On the Aesthetics of Programming and Modeling
Part 1: Evolving Program to Model
DRAFT 0.9

Paul A. Fishwick
Department of Computer & Information Science & Engineering
University of Florida
Gainesville, Florida 32611, U.S.A.
[email protected]
August 25, 2000

Abstract

Modeling is increasingly being employed in computer programming. With the economies of graphical user interfaces and software design tools favoring new approaches to software visualization, there are enough overlaps between programs and models to suggest that programming can be done effectively with modeling. Modeling, in turn, can be made more effective through an injection of aesthetic principles. We survey the linkages between modeling and programming, suggest an aesthetic extension, and then present a set of seven core issues that highlight challenges to the new modeling approach.
1 Introduction
This is the first of a two-part paper on the incorporation of aesthetics into programming and dynamic system modeling. The first part illustrates past, present, and future connections between computer programming and modeling. The second part argues for the need for aesthetics in modeling in general, and illustrates modeling examples from our rube research project. The singular goal of the two parts is to emphasize the importance of aesthetics in modeling enterprises normally associated with computer science and, more generally, with science and engineering. It is a propitious moment for aesthetics to bloom within computer science, since the cost of computing hardware and software has dropped significantly and the personal focus in user-interface design has become more pronounced. These factors, along with the increasing dependence of computer science on the task of modeling, help promote the connection between human and machine. Humans are fundamentally allegorical, analogical, and metaphorical. This is the way we
understand and interact with the world. Art has played a key role in this interaction, but the economy of technology has driven computing and scientific world representations into the realm of the very abstract and arcane. To make a model of the human circulatory system, we have had to make do with terse mathematical dynamic models even when the system itself has been graphically rendered to a photorealistic level. Likewise, to make a model of a program, we sketch stick-like figures and make do with icons cast from projective Platonic geometry. It is not that these are ideal forms of representation, but that they were forced upon us by concerns for efficiency and economy. As these concerns are gradually resolved, new aesthetic modeling representations emerge, which target individuals and their singular tastes; the importance of craft [50] is reborn within the modeling and computing disciplines.
2 Motivation
When one thinks of computer programs in their complex and cryptographic text-based structure, the idea of aesthetics does not generally come to mind. Knuth [43] authored a classic series of books entitled The Art of Computer Programming, and later created an approach to text-based software development termed literate programming [44], in which programmers develop more readable programs that, in themselves, serve as complete textual, typeset documentation. For the most part, software is text-based, and its typical incarnations seem incongruous with the sorts of productions commonly found in fine art. Why are programs limited to text, and what directions can we follow to pursue the path of artistic programs? To make software artistic, we need to endow software with some form of aesthetics. Aesthetics is the study of what it means to be beautiful and pleasing to the senses, and is viewed as a foundation and unifying theme of art. We do not delve into the complexities of aesthetics other than to suggest that an aesthetic direction implies one that yields art as a process and product. Rutsky [67] asserts that in “high tech,” in which programming certainly finds a home, “technology becomes much more a matter of representation, of aesthetics, of style.” We begin with a brief romp through the history of programming to find connections to modeling and to make eye contact with aesthetic structure. Most successful programming languages, since their inception in the 1950s, have not changed much in appearance. Today's programs tend to be long sequences of text and symbols, occasionally punctuated with symbolic delimiters. Programs first entered the world as raw machine code in a binary encoding, followed by a hexadecimal encoding, and evolved into assembly language, which has the significant benefit of being symbolic rather than numeric. It is not that anyone wished to program in such a fashion as an ideal approach to program representation.
Instead, this state of affairs was largely dictated by the limited hardware of the time. Symbols required translation tables, which were uneconomic
for a period until memory became affordable enough to justify such luxuries. Assembly language eventually yielded to the benefits of translator construction based on powerful parsers. Languages such as Algol, FORTRAN, and Lisp were born, and found an eager population of programmers who could finally communicate with the machine in something that remotely resembled natural language. Programs are often referred to as code, and if the uninitiated were to view a program, it would indeed appear to be in “code”: it would appear to be encrypted. In many ways, programs are encryptions with a key. The key to unlocking the formal secrets of code is a strict understanding of the symbol translation, an understanding learned over many years. Where is this programming evolution leading? Will we be programming in symbols and text for the foreseeable future, or are there other avenues to explore that make programs more accessible, not only to the average person, but also to programmers? Even though programmers, as a loosely knit group, may thrive on complexity, they also wish to ease the coding process, as the early history of software has shown. My central tenet is that it is both possible and desirable to craft software using aesthetic principles. The future goal of programming and computing lies at the crossroads of art and computer science. Achieving this goal requires laying down two important stepping stones that will carry us from program to artwork. The first is that we should craft programs as models. The second is that models should have aesthetic form. Combining the two yields the desired hypothesis: programs should have aesthetic form. To the extent that aesthetic form requires a focus on art, the mechanisms, means, and ways of art will become increasingly central to the programming discipline.
3 Definitions for Program and Model
A program is a set of instructions for carrying out operations on data. This set involves sequence, iteration, assignment, and conditionals. Different programming languages and their styles [32] create paradigms of computing, and still they share the following components: an ordered set (i.e., sequence) of statements, looping constructs for repetition of instruction groups, side effects for storage of data in variables and data structures, and methods of branching dependent upon truth-functional expressions. Even though programming languages tend to be arcane and involve abstract syntax, the idea of programming is intuitive and ubiquitous in everyday life. For example, to buy milk at the store, one can set forth the algorithm in Table 1, which takes us to the store and back to the car with a bottle of milk. Even though Table 1 can be considered both an algorithm and a program, it is specified in a form of natural-language pseudocode. Implicit loops are embedded within statements 6 and 7. Finding something requires iteration and conditionals: checking to see whether an item has been found, and halting the iteration once it has. If this
Table 1: Algorithm for Buying Milk

1: Get into car
2: Travel to the store parking lot
3: Search next available parking lot row
4: If empty spot found, go to 5, otherwise go to 3
5: Walk to store
6: Find milk in store
7: Find the shortest checkout line
8: Wait in line to pay for milk
9: Transact milk purchase
10: Walk to car
is all there was to programming, we would declare everyone an expert programmer and then move on to more difficult problems. The problem is how to express the above in a way that is meaningful and memorable, since modern programming languages tend to involve more complex, symbolically terse constructs. The get-milk program is part of the human's repertoire, whereas the second program, in Table 2, is more familiar to computer users since it reflects a program that would fit squarely inside a computer, not a human. The program allows people to edit text on a display monitor. This type of program is called a text editor; Emacs, vi, and NotePad are example text editors. Our design is to have a main entry/exit screen. From this screen, we transition into one of two modes: text and command. In text mode, the human enters text, as with a typewriter. In command mode, the user enters commands as to what to do with the text, such as cut, copy, and paste. There is a special macro mode where a sequence of editing commands and data can be bound to an arbitrary key. Keys are those commonly found on a keyboard: ^M means control-M, and ESC means the escape key.

A model is a representation of something. A source object X represents a subset of attributes belonging to target object Y. An analogical connection [73] is made between seemingly disparate objects. Semioticians refer to these objects as signs. X is called “a model of Y.” There is no uniquely identifying physical characteristic that makes an object a model since modeling simply identifies a relation between two objects. There are many types of models. A scale model is a small version of an object where homology is preserved; the scale model preserves a uniform shape transformation from source to target. A shape model is used in the computer graphics literature [31, 36] to describe complex shapes and
Table 2: Algorithm for a Text Editor

1: Main program entry/exit screen displayed
2: If any key is pressed go to 3, otherwise go to 2
3: Enter text mode
4: Process text entered by user
5: If key ESC is pressed, go to 7
6: If key ^M is pressed, go to 11, otherwise, go to 4
7: Enter command mode
8: Process commands entered by user
9: If key ^X is pressed, go to 1
10: If key Q is pressed, go to 3, otherwise go to 8
11: Enter macro mode
12: Process macro entered by user
13: If key ENTER is pressed, go to 7, otherwise, go to 12
how they fit together. Two examples of a shape model are octrees and Boolean operator trees. The trees are objects that define the shape of another object. An information or semantic model refers to a representation of semantic relationships among concepts or objects. These two types of models are common in the database and artificial intelligence literature. Another type of model is a dynamic model, which consists of elements that reflect the dynamics or behavior of the target (we will use dynamics and behavior interchangeably). This type of model is very common in science and engineering, and has become more common in computer science as we make models while designing software. Dynamic models are most relevant to programming since their purposes are similar at a fundamental level: programs and models encode structures that define behavior.
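To make the shape-model idea concrete, a Boolean operator tree can be sketched in a few lines of Python. The primitives, operator names, and point-membership style below are illustrative assumptions, not constructs from this paper: interior nodes are set operations, leaves are primitive solids, and the tree as a whole defines the shape of another object.

```python
# A sketch of a Boolean operator tree (a shape model): leaves are
# primitive solids and interior nodes are set operations. The tree
# defines a shape via point-membership queries. All names here are
# illustrative assumptions.

def sphere(cx, cy, cz, r):
    """Leaf: membership test for a solid sphere."""
    return lambda x, y, z: (x - cx)**2 + (y - cy)**2 + (z - cz)**2 <= r * r

def union(a, b):
    """Interior node: a point is inside if it is inside either child."""
    return lambda x, y, z: a(x, y, z) or b(x, y, z)

def difference(a, b):
    """Interior node: inside the first child but not the second."""
    return lambda x, y, z: a(x, y, z) and not b(x, y, z)

# Two joined spheres with a hollow carved out of the right-hand one.
shape = difference(union(sphere(0, 0, 0, 1), sphere(2, 0, 0, 1)),
                   sphere(2, 0, 0, 0.5))
```

Evaluating `shape(0, 0, 0)` tests whether the origin lies inside the composite solid; the tree itself, not any rendered image, is the model of the shape.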
4 A Model of a Program
We will craft a model that takes the place of the program in Table 2 by constructing a Finite State Machine (FSM), which is a type of dynamic model:

1. FSM = {I, O, δ, q0, Q, λ, S}
2. I = {set of keys}
3. O = Q
4. Q = {q0, q1, q2, q3}
5. δ : Q × I → Q
6. λ : Q → O
7. S : Q → {Main, Text, Command, Macro}

This is a formal model of the program's response to user input. The symbols are as follows: FSM is the tuple that contains the sets and functions found in the model; I and O are the input and output sets; Q is the state set; δ is a function that takes the current state and an input and produces the next state; λ is a function that maps the state to an output; and S defines English textual symbol strings that map to each state. The term “formal” is meant in the sense that the model is mathematical, employing algebraic constructs as a way of defining syntax. The semantics of this model are known only to the extent that someone is intimately familiar with set theory and discrete structures. There is very little difference between syntax and semantics in the purist sense of these words. These two terms identify representations. If one believes, for whatever reason, that one understands a particular representation, one calls this semantics. In semiotic terms, syntax and semantics are simply collections of signs. Most models differ significantly from textual programs and formalisms in that they appeal to the visual sense. However, in the limit, any representation can be a model, including pseudocode and textual programs. There is no reason why other senses cannot be targeted, but vision, being our primary form of human sensory input, serves to characterize most types of models. Fig. 1 displays a two-dimensional graphic that represents the previous formalism. Circles are states, and directed arcs are transitions from one state to another based on conditions labeled adjacent to each arc. The machine begins in the start state q0 (Main) and stays there until a key is pressed. To illustrate an analogy between systems, we present Fig. 2, which is isomorphic to Fig. 1 but models something completely different.
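The formalism translates almost mechanically into code. In the sketch below (our own, not part of the original model), δ becomes a dictionary keyed on (state, input); the self-loops for unlisted keys and the treatment of every key in q0 as “any” are simplifying assumptions that follow Table 2.

```python
# A sketch of the editor FSM: the transition function delta is encoded
# as a dictionary keyed on (state, input). Keys without an entry leave
# the state unchanged (a self-loop), and in q0 every key is treated as
# "any", per Table 2. The input alphabet is simplified for brevity.

S = {"q0": "Main", "q1": "Text", "q2": "Command", "q3": "Macro"}

delta = {
    ("q0", "any"):   "q1",   # any key leaves the entry/exit screen
    ("q1", "ESC"):   "q2",   # text mode -> command mode
    ("q1", "^M"):    "q3",   # text mode -> macro mode
    ("q2", "^X"):    "q0",   # command mode -> main screen
    ("q2", "Q"):     "q1",   # command mode -> text mode
    ("q3", "ENTER"): "q2",   # macro mode -> command mode
}

def step(state, key):
    if state == "q0":        # q0 reacts to any key at all
        key = "any"
    return delta.get((state, key), state)

def run(keys, state="q0"):
    for key in keys:
        state = step(state, key)
    return state

final = run(["a", "ESC", "Q", "^M", "ENTER"])  # ends in q2 (Command)
```

Running the key sequence above walks the machine from Main through Text, Command, and Macro modes, finishing in Command mode, exactly as tracing the arcs of Fig. 1 by hand would.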
Both figures represent models of software, but there is a difference. Fig. 2 models a physically plausible scene with boiling water undergoing internal state change. Moreover, Fig. 1 is translated into software and so is a replacement of sorts for a program. Modeling terminology is somewhat loose to the extent that Fig. 2 can be a model of a program, which in turn is a textual model of boiling water, or one might just say that Fig. 2 is a model of boiling water, thus bypassing the program. Fig. 1 is more interesting since there does not appear to be a physical target for the model. However, one can be posited. Consider Fig. 1 to be a model of a futuristic typewriter that is unconstrained by physics or by the typical functions we have come to expect from typewriters. While the creation of this pretend physical contraption may seem unusual, it firmly grounds the notion that modeling and programming are very
Figure 1: FSM with 4 states modeling a text editing program.
much alike. Fig. 2 is a model of water undergoing temperature change in response to an external input (a knob) and two internal temperature events. The machine begins in the start state q0 (Cold) and stays there as long as the input knob is in the off state. When the knob is off, I = 0; when the knob is turned on, I = 1. These are called external inputs since they come from the knob. I = 2 represents an internal state transition that occurs when the temperature of the water reaches 100 degrees (boiling), and likewise I = 3 occurs when the temperature reaches zero (the freezing point of water). Figs. 1 and 2 make sense only when one learns the semiotics inherent in such diagrams. After seeing many such diagrams, the connection between symbol and picture is often taken for granted, and yet we need to remind ourselves that the connection is completely arbitrary. Convention, and an attempt at some sort of standardization, lie behind the simple figure, but its particular composition is necessitated by economy. Drawing circles is simple and fast.
5 Programming, Software Engineering and Modeling
Fig. 1 indicates a style of building software that has proceeded in synchrony with object-oriented design. Object-oriented software design began with Simula [41] in the late 1960s and progressed to Smalltalk [49], which contained visual components. Objects in these languages could represent real objects or conceptual objects without any obvious real-world equivalent. Visual formalisms for object-oriented software development took hold
Figure 2: FSM with 4 states modeling heating water.
in the late 1980s and early 1990s [69, 66, 14]. The differences between programs and models are echoed in the perceived goals of creating programs versus engineering software. If programming and software engineering were generally considered to be identical, the shift from programming to modeling would now be a common activity, and modeling would have a much firmer foundation in computer science. In most computer science curricula, there are introductory courses on programming and software engineering. Programming is meant to allow someone to write executable software for a computer, while software engineering is generally considered to be a larger enterprise needed for designing, maintaining, and building complex programs. One goal is to be able to build software through object-oriented mechanisms, without resorting to programming, but this goal has not yet been achieved for several reasons. First, there are not enough good graphical tools that will automatically translate diagrams and more humanly accessible constructs to code. Second, the software community is not completely solidified and in agreement on which diagrammatic methods to employ, although the Unified Modeling Language (UML) [51, 46, 15] is gaining considerable momentum since it combines earlier methods from Booch [14, 13] and Rumbaugh [66]. Visual formalisms for object-oriented software began within software engineering research and are made manifest today by 2D graphical methods for FSMs such as Statecharts [35]. The software engineering community realizes the benefits of modeling, but more progress is needed before we are able to launch students into an environment where they learn how to program by designing software models from the very beginning, without first necessarily resorting to text-based programming languages.
Existing programming languages may well go the way of assembly languages in the 1980s. They will be used less frequently, to the point that their existence will form the substructure of the software we build, but we will have progressed to better interfaces by evolving a model-based approach to software. For now, programming courses are still taught to thousands of students, and only a handful of those go on to take software engineering courses where the modeling philosophy is instilled. Thus, software is best constructed through modeling, but even our academic curricula have not blended the two; some believe that programming and software engineering are two separate enterprises with no convergence in sight, or warranted. As a matter of pragmatism, traditional text-based programming languages will continue to play a significant role in the computer scientist's arsenal. There will be a steady evolutionary trend, however, toward an increased use of the senses to make programming more widely available and easier to perform. In the past, there were fewer differences between programming and modeling. Analog computing is centered around using physical objects as computing elements. Computers [24] can be structured out of the most mundane, but accessible, objects: from spaghetti sorters, string networks, and pulley-based logic gates to Tinkertoy computers that play tic-tac-toe [38]. Early forms of analog computing fostered computer programs and models simultaneously; there was no difference between the two. The move toward digital circuitry, while efficient, caused a bifurcation between models and programs since the digital program is chock-full of abstract entities. The connection of program to reality had become tenuous. Systems Theory [11, 57, 40] and its follow-on principles of Systems Engineering [78, 68], beginning with the study of Cybernetics [76, 7], provided a uniform view of systems.
Specializations of systems to handle discrete events described systems with irregular time intervals [42, 81], which were more attuned to how programs tend to operate. A system is defined through a model and contains input, output, and state, as well as mapping functions from input and state to output. It matters little whether the system is natural or artificial [71]; a system can represent a computer, a program, a pendulum clock, or a cow. The theme of designing systems of software continued into the field of Software Engineering during the 1970s. Software engineering diverged from systems theory, and yet retains its lineage in the sense that software is a type of system. Neural networks [4], simulated annealing [1], and genetic programming [45] are all methods that employ physical metaphors for computing; their internal representations are models of hypothetical, virtual phenomena in their domains. Some of these areas have been combined into the more encompassing area of natural computation [8]. The use of large numbers of entities, indicative of swarms and ant colonies [3], envisages computing as a natural collective. These models act as programs, and are organized around programs as vehicles for optimization; as long as a program can be organized around the goal of optimization, the program can be modeled by computational means suggested by natural
metaphor. In a larger sense, there is nothing particularly unique about natural phenomena that creates this paradigm for computation. One could just as well use steam shovels as a computing paradigm. The more pressing contribution, even more than that of computation as optimization, is that natural computation involves physical objects. It is object-oriented and grounded in items more familiar to us than arrays, pointers, and data types. Software is rapidly becoming distributed, and this leads to thinking of software as associated with specific pieces of hardware where the computing elements live. When software is executed on a mainframe computer, one can still do object-oriented design, but this sort of design has gained momentum as a natural consequence of having distributed software. It is difficult to avoid doing object-oriented design when the objects house the software components. Designing a distributed system naturally implies the formation of models. The term real-time software/system [34] also suggests a close correspondence between model and program. Software for a real-time system is closely coupled to hardware, and this hardware is connected together in a way that can be modeled. The model can serve both to assess the performance of the system and to design the software that drives the system itself. When the physical system becomes relevant to the software design issue, the task of modeling becomes a natural conduit for its complete design, maintenance, and fabrication. When software is mainframe-like, and monolithic rather than distributed, the creation of objects is made possible by posing the rhetorical question: How would I design a model if this program were distributed? Many languages targeted at novices [58] are model-based. The Logo language [59] was one of the first languages based on the idea of programming through the use of a turtle [2, 64] capable of carrying out a set of simple instructions, with graphical feedback for output.
Karel the Robot [60] has similar aims, replacing the turtle with a robot and extending the functionality of the moving agent with respect to its interaction environment. Methods of programming by demonstration or example [48, 23] are based on agent-based approaches to software development. The idea is to program through modeling, and where there may not be a natural physical object in a program, a microworld can be created so that the programmer benefits from thinking in terms that are natural, memorable, and aesthetic rather than artificial, arcane, and ugly. Several commercial products exist, such as Stagecast [72], which allow modelers to create simulations using graphically specified production rules. Modeling is the activity that allows humans to better reason about programs. The overall areas of software visualization [74] and visual programming [39] have produced many successes similar to those of programming by modeling, example, and demonstration. There are areas where the importance of visualization and sensory feedback is stressed, such as Human-Computer Interaction (HCI) and Visualization. In HCI, we find example research involving visualizing information [19] and texts on interface methodology [54, 53, 63]. Employing visualization and metaphors permits the human to better interact with the computer. For visualization, there has been significant work in visualizing data, program execution, and
software. Data visualization is, perhaps, the most active field, where scientific and engineering data are viewed from multiple perspectives, in 2D and 3D, using a wide variety of icons and color ranges. Brown [17] discusses methods for visualizing the execution of programs in terms of input/output. Shu [70] specifies a dichotomy between visual environments, referring to program and data visualization, and visual languages, referring to the actual creation of software using visual methods. Early visual software developments were catalogued in an edited volume by Chang [21] and recently in a special issue of Communications of the ACM [48] on “programming by example,” where the goal is to make programming easier through simulation and demonstration via rules. A quick search through the literature on the keywords “3D” and “programming” tends to sprout forth numerous articles and books on how to render 3D graphics. By turning this idea on its head, we imagine that 3D itself is used to do the programming, rather than vice versa. The area of 3D programming is related to our research and represents a relatively new area that holds much promise, especially with new 3D web-based technologies such as Java3D and VRML. Programming in 3D had to wait for efficient 3D graphics methods, but significant work has been done. Lieberman [47] pioneered one of the first efforts in transitioning from 2D programming to using 3D elements. Najork [52] created the Cube language, with cubes representing program nodes and pipes for flow. Oshiba and Tanaka [56] built 3D-PP, which contains regular polyhedra connected with lines for representing declarative, logic-based programs. The history of the relationship between software and model development is a rich one, and it suggests some straightforward questions about its method and success. Is it easier to program with icons, pictures, and model components?
Is it ahead of its time, with more practical methods being necessary for the immediate future? Petre et al. [61] raise many questions that help to gauge the efficacy of visual programming. To summarize the previous areas, we outline a significant convergence:

• Metaphors and paradigms played out in software engineering have encouraged the study of classes, objects, and components. Such terms suggest real-world analogs and a view of software as being composed of collections of things, most with concrete names. This tendency promotes a model-based view of software engineering. Issues: The use of metaphor should be exercised with care so as to pick something that the user finds intuitive and logical in the resulting human-computer interface.

• Visual methods in component and object design, as evidenced by software design movements such as UML, create software designs that are very much like the models used by systems engineers and simulation scientists. This connection is strengthened by the real-time distribution of software, forging a union between a piece of software and a real-world entity. Moreover, visual programming research has been ongoing for at least two decades. Issues: More multi-platform software support of
design approaches like UML is needed if we are to teach and perform visual design before programming, or at least in conjunction with it. Web-based support for graphical user interfaces will help, using Java Swing [75], XML-based Scalable Vector Graphics (SVG), and 3D graphical objects [79]. A critical mass of software support is required before graphical approaches will become practical. There is little doubt that the methods are succeeding for novices. The challenge is to support experts as well.

• Various forms of natural phenomena have encouraged the creation of software through models, as with simulated annealing and genetic programming. Given a program design in terms of optimization, the program can be obtained by executing models of artificially posited natural phenomena. The connection with the real world is valuable, as it is within object-oriented design. Issues: Natural computing paradigms are receiving much attention in computer science, and yet it is not clear how broadly they may be applied to software in general. Since the “software” is frequently adapted and semi-automatically converged upon, it is difficult for a human to understand what is happening inside the program. For example, neural networks and genetically grown programs have this problem since there are no sub-models inside the overall natural neural or genetic metaphor. It is difficult to “see inside” the program to reason about its behavior. This issue has touched off informal debates about whether programming might eventually become completely automated, obviating the need for seeing inside. Why create software by hand if we can grow it to suit an optimality criterion? For many types of software, this is logical and inevitable. However, one of the reasons we create models and programs is that modeling is an enjoyable human activity regardless of the existing state of automation.
6 Aesthetic Models
Our programs and models lack aesthetics primarily because the idea of being beautiful is not considered part of the traditional software requirements and design lifecycle. Our models in Figs. 1 and 2 tend to be graphical, which appeals to the visual sense, and yet they tend to be dry and abstract. A scale model such as a figurine is really not that different from a flat model made from circles and arrows. To engender a different mindset for modeling programs and to encourage aestheticism, we need to step back and look at modeling from a broader perspective. A source clay figurine (X) may model a target cow (Y). The complex modeling relationship between X and Y can be defined as a set of functions and mappings that take elements of X and restructure them in terms of Y's elements. The clay figurine is likely not anatomically correct, and thus anatomy is not preserved during this particular mapping; however, the overall shape and topology of the cow may
be preserved since there are legs, a body, and a head in both X and Y, in all the proper relative positions. In general, the mapping between source and target is, by definition, incomplete. An object X is a perfect model of itself, in the limit, which provides us with a reflexive modeling relation. The source is chosen based on the familiarity and accessibility of that object to the human interpreter. In a study of analogy in science, Hesse [37] points out that the mapping is couched in positive, negative, and neutral analogies. The cow figurine will have characteristics that are not present in the real cow (a negative analogy), whereas the cow's shape may positively map to the real cow. The fact that negative analogies exist is not cause for concern, since it reflects our own creative energies in crafting models and introducing aesthetics into them. As in film, humans are adept at creating a suspension of disbelief. This is a key aspect of our ability to imagine. We are able to selectively pick out the parts of the model that represent positively correlated maps to the target. For example, in the Bohr model of the atom, electrons whirl around the nucleus. One might imagine balls, like planets, orbiting a solar nucleus. Nobody suggests that electrons are spherical, or made of wood or plastic, and yet the model still serves an important purpose since it provides an explanation. Models provide human understanding and appreciation for the target phenomenon, despite the complexity inherent in the source-target mapping. While we might thirst for a formal isomorphism or a reverse target-to-source homomorphism, most practical models will not yield to these ideals. We choose materials, shapes, and objects that are familiar to us, and these serve as the bedrock for our modeling activity. As Black [12] points out, it is not that we are creating visualizations of phenomena during model design; rather, visualization is a side effect of that which is familiar to us.
We understand the world through familiar cues from all five senses; it is this familiarity that drives our understanding. It is not necessary to limit ourselves to geometric primitives during model design; these primitives have little in the way of familiarity. Circles, rectangles and cubic splines are economically-motivated crutches. In relating scientific modeling to the creation of myth and metaphor, Barbour [9] points a path that lies between realism and fictionalism for the role of models. As Eco points out in A Theory of Semiotics [25], and as explained by Bennett [10], "if a term cannot be interpreted, it becomes explainable only by another sign. Therefore, categories will overlap, and we have a process of infinite recursivity, unlimited semiosis, in the form of a labyrinth." When we speak of connected models rather than signs, we term this labyrinth a multimodel [27, 30, 28, 29]. Others may see this as a vast knowledge base of modeling information, to culminate in knowledge warehouses and semantic networks [77]. The model's size is modified to make the actual phenomenon more humanly accessible, and to breed familiarity with the target among those who interact with the model. It is not hard to imagine that many scale models are highly crafted and represent pieces of art. Even an anatomical cow model, with visible, removable organs, is artistic and may also be useful in veterinary medicine for training purposes. One can further imagine a collection
of models for the target, where the clay figurine models the anatomical cow, which in turn models the real cow. If the source objects are made virtual, using the computer arts (i.e., computer graphics, visualization, music, virtual reality hardware interfaces), then we can more easily access the individual models of the labyrinthic multimodel. Manipulation of the multimodel gives us many perspectives on one object. With the ease of virtual model construction using computer graphics, our models need not be limited to singular media. An artist might tap on the virtual clay figurine to yield organ placement information, and the veterinarian might prefer to catalog the cow as a figurine, which can deliver other model-based characteristics upon request. Fig. 3 represents a somewhat different view of the same dynamics as found in Fig. 2. Individual elements were taken from individual images and overlaid to create the FSM in Fig. 3(a). The dark, highlighted 90-degree edges denote state transitions. One must "interact" with Fig. 3(a) to surface the equational conditions. Such interaction can be fast and efficient, built using overlays to see state names, transition conditions, and symbolic links back to the formalism. An example interaction involves the mouse passing over regions of Fig. 3(a). Fig. 3(b) illustrates an alternative design using an architectural metaphor: rooms represent states, and doorways represent transitions between states. The columned façade toward the left in Fig. 3(b) symbolizes an entryway into the start FSM state, so it is clear where the "machine" begins execution. The transition from Fig. 2 to Fig. 3 requires some in-depth discussion. Fig. 2 is a fairly typical 2D graphical representation of an FSM if one is familiar with the notation; circles represent states, and directed arcs represent transitions. Fig. 3 introduces an aesthetic component and would not be familiar to the computer scientist.
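Whatever visual skin the FSM wears, whether circles and arcs or rooms and doorways, the underlying machine is the same: states joined by transitions guarded by equational conditions. A minimal sketch, assuming a hypothetical two-state machine over a temperature reading (the guards and state names are invented, not the paper's actual figure):

```python
# Minimal FSM sketch (assumed, not the paper's implementation): states are
# the circles/rooms, transitions are the arcs/doorways, and each transition
# is guarded by an equational condition on an input variable.

class FSM:
    def __init__(self, start, transitions):
        # transitions: list of (state, guard, next_state) triples
        self.state = start
        self.transitions = transitions

    def step(self, x):
        # Fire the first transition out of the current state whose guard holds.
        for src, guard, dst in self.transitions:
            if src == self.state and guard(x):
                self.state = dst
                break
        return self.state

# Hypothetical two-state machine over a temperature reading T.
fsm = FSM("cool", [
    ("cool", lambda T: T >= 100.0, "boiling"),   # arc condition: T >= 100
    ("boiling", lambda T: T < 100.0, "cool"),    # arc condition: T < 100
])
print(fsm.step(25.0))    # stays in "cool"
print(fsm.step(101.0))   # moves to "boiling"
```

The aesthetic question raised by Figs. 2 and 3 concerns only the presentation layer; both designs would drive the same structure.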
There are seven key issues: 1) Motivation, 2) Information, 3) Economy, 4) Communication, 5) Expertise, 6) Education, and 7) Physical Objectivism. We use the perceived dichotomy between Figs. 2 and 3 to underscore each issue.

7 Issues

7.1 Motivation
What is the motivation for Fig. 3? We are aesthetic beings and are generally motivated toward aesthetics in our methods and cognition [62]. There is more to modeling than an efficient, information-theoretic transmission of structure. Numerous human cognitive skills are involved, such as aesthetics, mnemonics, concentration, attention, and comprehension. We construct Fig. 3(a) with aesthetics in mind and position the states in approximately rectangular regions, overlapping one another. Fig. 3(b) contains rooms and an enveloping space for an avatar to move within the state space, to explore the dynamics from the inside looking out. The Fig. 3 designs may be pleasing to some subset of the general population, and in particular, to individuals who might prefer one style over another. The states are more visually represented, and if sounds were attached to each state, we could hear the current FSM state as the machine is simulated. These digital additions heighten a sense of awareness, and the piece is more memorable and familiar than Fig. 2. Is Fig. 3 the computer scientist's epitome of syntactic sugar, exemplifying inefficient mechanisms for storing and retrieving information? The existence of the computer science term syntactic sugar, by itself, suggests that programmers have an inherent bias toward abstraction and away from sensation; however, this may be changing in an era rife with research in data and program visualization. It also indicates a history where programming is viewed and taught as a branch of mathematics. There is good historical reason for this, since programming has evolved from researchers such as Turing, Post and Church, who were mathematicians above all. Glass points out that software is in need of informal and creative methods, with protocol analysis results suggesting that modeling and simulation play a key role in software engineering design [33, p. 178]. Mathematical formalism plays the role of creating an infrastructure, but it is not complete, and doesn't reflect how humans craft software. Fig. 3 breaks away from a pseudo-standard for FSM depiction, and some may question why the status quo is not upheld for the sake of more people understanding it, since they will have been taught about graphical FSM structure using the more traditional symbols. The argument for inertia can be strong, and suggests that we often become wary of change, and yet aesthetic urges serve as our primary motivator for considering the possibilities of modeling as we might in Fig. 3.

(a) 2D sketch
(b) 3D architectural space.
Figure 3: Alternate designs for Fig. 2.

7.2 Information
Regarding the transmission of information in the canonical theoretic sense, Fig. 2 appears to have more information, and Fig. 3 less information. For example, the conditions on the arcs are present in Fig. 2 but not in Fig. 3. The interpretation of information requires the recipient to know the "rules" for decoding the information stream delivered from the sender. Someone learning FSM models for the first time will be equally confused about both representations. Until one teaches the mapping of symbol to meaning, all representations appear to be in code. However, Figs. 2 and 3 can be seen as virtual entities connected to other entities based on user-interaction. This is accomplished if we imagine both figures on a computer screen. As the user scans Fig. 2, they locate the mathematical arc conditions. When the user interacts with Fig. 3, the same conditions appear, say based on mouse or pen movement. There is time taken in scanning and mouse-clicking, and so with virtually represented models, the issue of trying to put all information down on a hardcopy medium becomes moot. The information is part of the multimedia, whether one scans, zooms with a magnifying lens, or clicks with the mouse. All of these activities take time [18], with analytic tools such as Fitts' Law being applied to measure human stimulus-response. It is rare that all necessary information about an object of interest
has an immediate presence. In the physical world, sensors such as telescopes and microscopes extract increasing amounts of information when used. Virtual sensors perform similar functions. Virtual lenses, VCR controls, scrolling, panning, and many other forms of user interface are necessary to refine and expand all available information content. Not all methods are equivalent in terms of efficiency and ergonomics, but efficiency must be weighed against individual aesthetic requirements and taste. Today, models are implemented as multimedia; models are connected in vast networks, to create multimodels [28]. No phenomenon has a single view, perspective or layer. This concept is nicely surfaced in the role of the Cubists during the 1920s. The creation of models must take this network into account since the modeler will traverse and explore the network. Multimedia on the web implies dynamic content, and so information content is multi-faceted for models as it is for hypermedia on the web. The idea of immediately grasping everything about model representation is illusory. Eyes scan, fingers operate a mouse, and auditory modeling cues must be sequentially streamed. Models, like the phenomena that they model, are to be treated as highly dynamic and interactive constructions. The linkages among models need to take us from the very aesthetic to the very formal, at lower levels of abstraction. The goal is not to replace formalism, but to augment it. Likewise, the use of all forms of model is to be encouraged, so that our models link text, natural language, and equations with objects traditionally considered to have more aesthetic content. The driving point in the representation of a model as a multimodel, with many perspectives, is that a model defined by a single view is no longer complete or acceptable. The focus on multimodels leads us to issues regarding model comparison.
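The time cost of scanning, zooming, and clicking that Fitts' Law quantifies can be sketched directly from its Shannon formulation, T = a + b log2(D/W + 1), where D is the distance to a target and W its width. The constants below are illustrative placeholders, not empirical values:

```python
import math

# Fitts' Law (Shannon formulation): movement time to acquire a target of
# width W at distance D is T = a + b * log2(D/W + 1). The device constants
# a and b are hypothetical defaults chosen for illustration only.
def fitts_time(D, W, a=0.1, b=0.15):
    return a + b * math.log2(D / W + 1)

# Doubling the index of difficulty raises acquisition time only linearly
# in the logarithm: interaction cost grows gently with interface demands.
print(round(fitts_time(D=700, W=100), 3))  # wide target: cheaper
print(round(fitts_time(D=700, W=50), 3))   # narrow target: costlier
```

The quantitative point is modest: revealing arc conditions on demand (as in Fig. 3) trades a bounded, logarithmic interaction cost for the visual economy of the display.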
In science, it is common among modeling groups to socialize with those of a familiar ilk, and espouse media promoting one model type over another. This is common in art as well, with someone claiming, for example, that a Gothic style is inferior to an Impressionist style. We need to embrace the idea that diversity in modeling is a positive thing. There is no convincing proof that using one modeling technique is superior to another when taste, familiarity and personal views are at stake. The aesthetic form in Fig. 3 may be seen as troublesome based on perceived information transfer. There are three issues with regard to this problem. The first issue is where Fig. 3 doesn't add any information over Fig. 2 and is therefore deemed superfluous. The underlying assumption is that economy of information flow is the main driving force behind modeling. But this is not true, since Fig. 3 also might improve aesthetics (feeling, pleasure), mnemonics, concentration, and attention span. It is not enough to use information theory as the sole criterion needed to judge the model's efficacy. The second issue is where Fig. 3 is seen as adding the wrong information, over information transmitted via Fig. 2; however, this situation is in line with Hesse's comments on negative analogy inherent in all models. Negative analogies exist, but humans appear to have little problem in separating out what they should study versus what they should ignore in model structure. One can also consider that the circle radii and spline arcs in Fig. 2 are
confusing since they are not part of the model structure and are to be ignored. The bark and bump maps of a computer-graphics rendered tree for visualizing hierarchical data address our familiarity with trees at the expense of our having to differentiate desired information content from tree bark. Humans learn to distinguish the salient parts of the model from those features that have no effect. There is no need to minimize sensory information unless it is truly overwhelming and defeats the modeling purpose. The third issue is one where Fig. 3 is seen to have less information than Fig. 2. This problem is alleviated when realizing the models in a multimedia, virtual form, where the models are seen as always dynamic and susceptible to human interaction. No model stands alone. Multimodels create networks of models, with the network being traversed by the user in gaining new information while realizing entirely new views.

7.3 Economy
The notion of economy is evanescent. As computer models become as easy to create as moving wooden blocks on the floor, this has a marked effect on how we model. We are media-challenged only due to economic reasons, and when a medium becomes economic, this expands the structure of what it means to model. Fig. 3 may be seen as inferior based on the lack of economy in its creation. Making circles and arcs can be fast and economic, whereas creating 2D images and architectural forms is more complex and costly for the modeler. However, the models can take the same time to create. The four state rectangles in Fig. 3(a) were pre-built icons from a library, not unlike clipart. Likewise, Fig. 3(b) is constructed from plug-and-play 3D parts from a web repository. That which may appear to be uneconomical today is economical tomorrow, and so economy is an ever-changing target. Many tools exist for importing images, taken from the Internet, and placing them in a scene. Domain-specific simulation tools will usually contain pre-built icons, to save modeler time. At one time, visual representations such as Fig. 2 were uneconomical, and so formal, textual representations were the norm. No tools existed to permit quick and efficient FSM renderings, regardless of what graphical symbols were used. Likewise, 3D modeling and animation tools are becoming very economical. So, when one talks of efficiency, Figs. 2 and 3 are on an even keel. The user is free to choose a representation that is desirable. Usability and interface issues are still present whether or not the individual is making aesthetically motivated decisions. However, parsimony drives our choices only to the extent found acceptable on a personal level.

7.4 Communication
One of the purposes behind modeling is to create standards to improve communication among groups and individuals. Should we try to promote a standard view of FSMs, to enhance communication? Model types cluster in groups and often different groups
will take slightly different approaches to FSM representation; no one representation has the blessing of all modelers. We all appreciate creativity, but standard representations have their place. The concerns regarding standards and inter-modeler communication are valid, but the nature of media is changing. For the better part of the last century, broadcast and multicast were the major methods for disseminating information. The individual was not important, but large groups of individuals were, and so media blitzes that had no finely-tuned target took place on all forms of devices from radios and televisions to computers connected to the web. For most of our history, broadcasting has been the linchpin for most forms of information delivery. When we think of information transmission, we may think of efficiency, general appeal, and experiments on humans to streamline the graphic design process. This view toward media transmission is changing to favor the individual, and so generalizations about what constitutes "good design" become moot; good design is relative to a particular person. The individual chooses and controls the artistic media in which they operate, and thus what is good is what is of personal preference. A recent special issue touting the personalization of computing [65] bolsters this argument. Economies are evolving, and individual choice forms the new criteria for design. Borenstein [16] points out the dilemma for the "artist as interface designer" when dealing with a multitude of people for whom the designs are intended. The key problem is that the person using the design is not the one with any of the artistic control, and this is what is gradually changing. Principles for categorizing and grouping designs and artistic form are still valuable [5, 6], but at the end of the day, individual taste is the mark of success for the model structure. We are gradually changing to favor a 1-to-1 relation between media content developer and the recipient.
Several examples of user interface design allow the user to create the interface and the way that they interoperate with the media. A good example is the use of skins for virtual panels as found in music software such as Winamp [55] and most audio and MIDI sequencers that employ a host of virtual machines in software. Another example is in the use of avatar design for multi-player 3D first-person games and shared virtual worlds [22]. The idea is that the individual is the focal point, and not groups of individuals, some of whom may have been tested via experiments to create efficient user interfaces that work for groups. On the other hand, such experiments are still vital for identifying trouble-spots and improvements in designs. Aesthetics plays a dominant role when individuals are allowed to customize their interfaces to the computer. A renewed individual focus doesn't mean that group communication is tossed aside. It means that groups can evolve and form dynamically based on desired metaphors and appealing individual styles. In modeling communities, this type of group activity is already widespread. The individual slant posed by new software has the potential to enrich all model forms with aesthetic qualities. There are levels of group communication of models that we have not yet studied. For example, in the extreme, each individual may have a singular model, and yet these models are mapped through their underlying common model structures. If I settle on representing FSMs in the style of Fig. 3, then I can do this, and also communicate with others since I also know the formal
concept of an FSM. A person may own a unique pen, but they also know the concept of a pen (i.e., a pen has a case and an ink-delivery system) when communicating with others. Their understanding of core concepts for familiar objects doesn't require them to own or use generic pens. Prior to the invention of Gutenberg's metal typesetting for books in the Middle Ages, tools and techniques for mnemonics [80, 20] played a major role in communication. Eco [26], in The Name of the Rose, weaves a fast-paced tale full of semiological mappings. The monastery library, for example, contains a room layout that maps naturally to the relative positions of European geography. Physical objects can represent anything and can be signs [25], in addition to representing other objects and functions. After books became more economical, the perceived need for mnemonic tools began to wane; books could store information for the collective. However, aside from the need to create information repositories, mnemonic methods also reinforced a need for sensory involvement in the world. Books created a more uniformly abstract mentality concerning representation. The introduction of the computer allows for mnemonic principles to flourish again in the aura of the human-computer interface. The use of model structure to communicate structure and dynamic models also brings us to the question of our own belief structures and ontologies—are the models real or completely fictitious? We must be careful about labels that we place on structures that we create during the modeling process. For instance, one might declare that the mathematical model represents the true model and that Fig. 2 is a visualization, or that Fig. 2 is a model and that Fig. 3 is a cartoon. These sorts of semantic labels identify fundamental modes of viewing models and casting our own representations of reality. Everything is a visualization if this term must be used to emphasize one sense above the four others.
One would not stand by a window seeing an oak tree outside, and say "that is a beautiful visualization of a tree," and yet it is no less a visualization than a charcoal sketch of a tree. Magritte demonstrated as much in his painting of a pipe. Claiming that one thing is a visualization and another thing is the model has the potential to artificially fragment the modeling craft. Every real-world object can be a model, since it is the role an object plays towards another that makes it a model, and not its inherent physical structure.

7.5 Expertise
One might suggest that Fig. 3 is simpler to understand and more appealing, and therefore it appeals to a novice level of expertise. First of all, this is true; making models more aesthetically appealing does help novices, but it also helps experts. A consistent trend in computer technology is for novel devices and interfaces to begin with novices and then proceed toward experts, until everyone is using them. Experiences with the Apple Macintosh (i.e., Mac) support this claim. In 1984, the Mac caused a sensation by
having a much improved user interface, based on earlier work at Xerox PARC. Many experts ignored the Mac, preferring to work with line editors and arcane operating system commands. In some cases, this avoidance was appropriate where the desktop operations took significantly longer than their symbolic counterparts. The feeling was that the Mac, with its visual, pleasing window environment, was strictly for novices. In hindsight, we recognize that this desktop metaphor took over and is prevalent regardless of level of expertise. The automobile has a similar, albeit longer, history. New features were added for "novices", and these features were gradually acquired by experts as they were found to be useful. Ideal interfaces allow novices and experts to interface with a model, allowing experts more control and options should they decide to avail themselves of these benefits.

7.6 Education
Inertia plays a key role in the acceptance of new technologies and methods. Fig. 3 will probably look natural to a child and non-standard to an adult who has studied FSMs. It is easy to underestimate the effect of inertia on one's education in an area. I have seen figures such as Fig. 2 so often in my formal education that when I see something different, I question its utility. Why is it different, and why would I be tempted to switch to something else? While wrestling with this concern, we should recognize that other elements of our ontological existence are undergoing a regular upheaval. Model representations that seem common today will be antiquated in 20 years. The impact of inexpensive 3D at home will change the type of information delivery expected by our youth. Stick figures, simple Platonic solids, and thin-line polygons are likely to go the way of the slide rule. Still, a convincing approach is to specify a gradual evolution from existing modeling methodology to the desired goal. It is likely that modelers prefer smaller, incremental steps to completely revamping their working strategy.

7.7 Physical Objectivism
We have declared three model forms (equational, Fig. 2, and Fig. 3). For consistency, we should note that these three models are physical. Mental representations are not capable of being transmitted or recognized as such without resorting to delivery through a physical medium. The formal, symbolic FSM is just as much a visualization of temperature dynamics as is Fig. 3. Whether we pick up tiny pieces of paper or use wooden blocks, the technique of modeling remains the same—relating physical objects through analogy and metaphor. The presented models are ink impressions on a medium. They may be the paper you hold in your hands or the cathode ray tube at which you stare. Therefore, they are objects that model the dynamics of water, another physical object. This view allows us to easily extend our models into three dimensions. The simplest method of making 3D models is to minimally extend the 2D into 3D by adding an extra parameter or degree
of freedom. For example, the circles in Fig. 2 would become spheres, and the arcs would become pipes. However, there is nothing limiting us to this simplistic extension. I can create an FSM using an architectural metaphor by creating rooms for states, and doorways for transitions as in Fig. 3(b). Regardless of model type, the modeling relation bridges two objects through metaphor. The word "visualization" can sometimes be misleading. As Black [12] points out, the visualization of a structure is actually the creation of something that is familiar to us. It so happens that we receive around 70% of our information through the visual cortex, and so vision is a key sense, but not the only one. Sensation might be a more accurate term.

8 Summary
Software engineering is a mature field rife with discovery and an ever-expanding supply of good design paradigms. Paradigms such as natural computing and design with objects, components, and patterns are evolving software that has real-world ties. The object-oriented paradigm, in particular, is presently over three decades old and still going strong, although visual methods for object design have not yet entered the mainstream, despite efforts such as UML's visual design methodology, and its strong foundation. I can only suspect that much of the hesitation is due to a general lack of available tools for visual design, since user-interface design based on the desktop metaphor has demonstrated overwhelming success in utilizing visual design for operating system software. Our view of programs as models has a fairly strong counterpart in the software engineering literature, and yet programming in text-based languages remains the entry-level activity for college freshmen. An eventual goal should be to wean people gradually off the text-based languages as they were weaned off assembly languages in the 1980s. This does not imply creation of weak visual languages, since the visualizations, and their translation to an underlying code, must be thorough and be as capable and flexible as Java and C++. UML is a good beginning in this direction since it has powerful methods for designing the semantics of software components, and it has a solid base of users. Our goal to make models aesthetic is an even further step along this evolutionary path. We have discussed a number of core issues in Sec. 7, since they are viable concerns for any programmer. The novice programming efforts are pioneering an aesthetic approach, but they need to evolve into more powerful paradigms if they are to be accepted by programming experts, and more widely disseminated.
I've attempted to demonstrate that a focus on aesthetic design need not compromise this modeling activity; it has the potential to enhance it significantly. It is not clear which computer scientists will adopt which aesthetic styles, and yet recent HCI approaches have shown that catering to the individual is a necessary condition for building usable software that has an edge over the competition. There is always the "Catch-22" syndrome of programmers not building
such designs, since they were not taught them themselves, but I expect the new game-console generation to break this impasse.

Acknowledgments

I am indebted to my students and sponsors. I would like to thank students, past and present, for their continued efforts in rube: Robert Cubert, Andrew Reddish, John Hopkins, and Linda Dance. I would also like to thank research sponsors of our modeling and simulation work: Air Force/Rome Laboratory (contract F30602-98-C-0269) and the Department of the Interior (grant 14-45-0009-1544-154).

References

[1] Emile Aarts and Jan Korst. Simulated Annealing and Boltzmann Machines: A Stochastic Approach to Combinatorial Optimization and Neural Computing. John Wiley and Sons, 1989.
[2] Harold Abelson and Andrea diSessa. Turtle Geometry. MIT Press, 1980.
[3] Christoph Adami. Introduction to Artificial Life. Springer Verlag, 1998.
[4] James A. Anderson and Edward Rosenfeld. Neurocomputing: Foundations of Research. MIT Press, 1988.
[5] Rudolf Arnheim. Visual Thinking. Faber and Faber, London, 1969.
[6] Rudolf Arnheim. The Dynamics of Architectural Form. University of California Press, 1977.
[7] W. Ross Ashby. An Introduction to Cybernetics. John Wiley and Sons, 1963.
[8] Dana H. Ballard. An Introduction to Natural Computation. MIT Press, 1997.
[9] Ian G. Barbour. Myths, Models and Paradigms: A Comparative Study in Science and Religion. Harper and Row, 1974.
[10] Helen T. Bennett. Sign and De-Sign: Medieval and Modern Semiotics in Umberto Eco's The Name of the Rose. In M. Thomas Inge, editor, Naming the Rose, pages 119–129. University Press of Mississippi, 1988.
[11] Ludwig von Bertalanffy. General System Theory. George Braziller, New York, 1968.
[12] Max Black. Models and Metaphors: Studies in Language and Philosophy. Cornell University Press, 1962.
[13] Grady Booch. Object-Oriented Development. IEEE Transactions on Software Engineering, 12(2):211–221, February 1986.
[14] Grady Booch. Object Oriented Design. Benjamin Cummings, 1991.
[15] Grady Booch, James Rumbaugh, and Ivar Jacobson. The Unified Modeling Language User Guide. Addison-Wesley, 1999.
[16] Nathaniel S. Borenstein. Programming as if People Mattered. Princeton University Press, 1991.
[17] Marc H. Brown. Algorithm Animation. MIT Press, 1987.
[18] Stuart K. Card, Jock D. Mackinlay, and George Robertson. A Morphological Analysis of the Design Space of Input Devices. ACM Transactions on Information Systems, 9(2):99–122, April 1991.
[19] Stuart K. Card, Jock D. Mackinlay, and Ben Shneiderman. Readings in Information Visualization: Using Vision to Think. Morgan Kaufman, 1999.
[20] Mary J. Carruthers. The Book of Memory: A Study of Memory in Medieval Culture. Cambridge University Press, 1990. Cambridge Studies of Medieval Literature, Volume 10.
[21] Shi-Kuo Chang, editor. Visual Languages and Visual Programming. Plenum Press, 1990.
[22] Cybertown. http://www.cybertown.com.
[23] Allen Cypher, Daniel C. Halbert, David Kurlander, and Ellen Cypher, editors. Watch What I Do: Programming by Demonstration. MIT Press, 1993.
[24] A. K. Dewdney. The Armchair Universe: An Exploration of Computer Worlds. W. H. Freeman and Company, 1988.
[25] Umberto Eco. A Theory of Semiotics. Indiana University Press, Bloomington, Indiana, 1976.
[26] Umberto Eco. The Name of the Rose. Harcourt Brace, 1983.
[27] Paul A. Fishwick. Heterogeneous Decomposition and Coupling for Combined Modeling. In 1991 Winter Simulation Conference, pages 1199–1208, Phoenix, AZ, 1991.
[28] Paul A. Fishwick. Simulation Model Design and Execution: Building Digital Worlds. Prentice Hall, 1995.
[29] Paul A. Fishwick, James G. Sanderson, and Wilfried F. Wolff. A Multimodeling Basis for Across-Trophic-Level Ecosystem Modeling: The Florida Everglades Example. SCS Transactions on Simulation, 15(2):76–89, June 1998.
[30] Paul A. Fishwick and Bernard P. Zeigler. A Multimodel Methodology for Qualitative Model Engineering. ACM Transactions on Modeling and Computer Simulation, 2(1):52–81, 1992.
[31] James D. Foley, Andries van Dam, Steven K. Feiner, and John F. Hughes. Computer Graphics: Principles and Practice. Addison-Wesley, second edition, 1990.
[32] Carlo Ghezzi and Mehdi Jazayeri. Programming Language Concepts. John Wiley, second edition, 1987.
[33] Robert L. Glass. Software Creativity. Prentice Hall, 1995.
[34] Hassan Gomaa. Software Design Methods for Concurrent and Real-Time Systems. Addison-Wesley, 1993.
[35] David Harel. STATEMATE: A Working Environment for the Development of Complex Reactive Systems. IEEE Transactions on Software Engineering, 16(3):403–414, April 1990.
[36] Donald Hearn and M. Pauline Baker. Computer Graphics. Prentice Hall, 1994.
[37] Mary Hesse. Models and Analogies in Science. University of Notre Dame Press, 1966.
[38] Daniel W. Hillis. The Pattern on the Stone: The Simple Ideas That Make Computers Work. Basic Books, 1998.
[39] Tadao Ichikawa, Erland Jungert, and Robert R. Korfhage, editors. Visual Languages and Applications. Plenum Press, New York, 1990.
[40] R. E. Kalman, P. L. Falb, and M. A. Arbib. Topics in Mathematical Systems Theory. McGraw-Hill, New York, 1962.
[41] Bjorn Kirkerud. Object-Oriented Programming with Simula. Addison-Wesley, 1989.
[42] George J. Klir. Architecture of Systems Problem Solving. Plenum Press, 1985.
[43] Donald Knuth. The Art of Computer Programming: Volumes 1 to 3. Addison-Wesley, 1968.
[44] Donald Knuth. Literate Programming. Stanford University Center for the Study of Language and Information (Lecture Notes, No. 27), 1992.
[45] John Koza. Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, 1992.
[46] Craig Larman. Applying UML and Patterns: An Introduction to Object-Oriented Analysis and Design. Prentice Hall, 1998.
[47] Henry Lieberman. A Three-Dimensional Representation for Program Execution. In E. P. Glinert, editor, Visual Programming Environments: Applications and Issues. IEEE Press, 1991.
[48] Henry Lieberman. Programming by Example. Communications of the ACM, 43(3):73–74, March 2000.
[49] Peter C. Marzio. Rube Goldberg, His Life and Work. Harper and Row, New York, 1973.
[50] Malcolm McCullough. Abstracting Craft: The Practiced Digital Hand. MIT Press, 1998.
[51] Pierre-Alain Muller. Instant UML. Wrox Press, Ltd., Olton, Birmingham, England, 1997.
[52] Marc Najork. Programming in Three Dimensions. Journal of Visual Languages and Computing, 7(2):219–242, June 1996.
[53] Jakob Nielsen. Usability Engineering. Morgan Kaufmann, 1993.
[54] Donald A. Norman. The Design of Everyday Things. Doubleday Books, 1990.
[55] Nullsoft Winamp. http://www.winamp.com.
[56] Takashi Oshiba and Jiro Tanaka. 3D-PP: Visual Programming System with Three-Dimensional Representation. In International Symposium on Future Software Technology (ISFST '99), pages 61–66, 1999.
[57] Louis Padulo and Michael A. Arbib. Systems Theory: A Unified State Space Approach to Continuous and Discrete Systems. W. B. Saunders, Philadelphia, PA, 1974.
[58] John F. Pane and Brad A. Myers. Usability Issues in the Design of Novice Programming Systems. Technical Report CMU-CS-96-132, Carnegie Mellon University, 1996. http://www.cs.cmu.edu/~pane/ftp/CMU-CS-96-132.pdf.
[59] Seymour Papert. Mindstorms: Children, Computers, and Powerful Ideas. Basic Books, New York, 1980.
[60] Richard E. Pattis, Jim Roberts, and Mark Stehlik. Karel the Robot: A Gentle Introduction to the Art of Programming. John Wiley and Sons, 1994.
[61] Marian Petre, Alan Blackwell, and Thomas Green. Cognitive Questions in Software Visualization. In John Stasko, John Domingue, Marc H. Brown, and Blaine A. Price, editors, Software Visualization: Programming as a Multimedia Experience, chapter 30. MIT Press, 1998.
[62] Roger Pouivet. On the Cognitive Functioning of Aesthetic Emotions. Leonardo, 33(1):49–53, 2000.
[63] Jef Raskin. The Humane Interface. Addison-Wesley, 2000.
[64] Mitchel Resnick. Turtles, Termites, and Traffic Jams: Explorations in Massively Parallel Microworlds. MIT Press, 1997.
[65] Doug Riecken. Personalized Views of Personalization. Communications of the ACM, 43(8):27–28, August 2000.
[66] James Rumbaugh, Michael Blaha, William Premerlani, Frederick Eddy, and William Lorensen. Object-Oriented Modeling and Design. Prentice Hall, 1991.
[67] R. L. Rutsky. High Technē: Art and Technology from the Machine Aesthetic to the Posthuman. University of Minnesota Press, 1999.
[68] Andrew P. Sage. Systems Engineering: Methodology and Applications. IEEE Press, 1977.
[69] Sally Shlaer and Stephen J. Mellor. Object-Oriented Systems Analysis: Modeling the World in Data. Prentice Hall, Yourdon Press, 1988.
[70] Nan C. Shu. Visual Programming. Van Nostrand Reinhold Company, 1988.
[71] Herbert A. Simon. The Sciences of the Artificial. MIT Press, 1969.
[72] Stagecast Software. http://www.stagecast.com.
[73] Barbara M. Stafford. Visual Analogy: Consciousness as the Art of Connecting. MIT Press, 1999.
[74] John Stasko, John Domingue, Marc H. Brown, and Blaine A. Price, editors. Software Visualization: Programming as a Multimedia Experience. MIT Press, 1998.
[75] Java Swing. http://java.sun.com.
[76] Norbert Wiener. Cybernetics: or Control and Communication in the Animal and the Machine. MIT Press, 1948.
[77] William A. Woods. What's in a Link: Foundations for Semantic Networks. In Daniel Bobrow and Allan Collins, editors, Representation and Understanding. Academic Press, 1975.
[78] A. W. Wymore. A Mathematical Theory of Systems Engineering: The Elements. Krieger Publishing Co., 1977.
[79] http://www.web3d.org/.
[80] Frances A. Yates. The Art of Memory. University of Chicago Press, 1974.
[81] Bernard P. Zeigler, Herbert Praehofer, and Tag Gon Kim. Theory of Modeling and Simulation: Integrating Discrete Event and Continuous Complex Dynamic Systems. Academic Press, second edition, 2000.