A Framework of Syntactic Models for the Implementation of Visual Languages

G. COSTAGLIOLA, A. DE LUCIA*, S. OREFICE** and G. TORTORA
Dipartimento di Informatica ed Applicazioni, Università di Salerno, 84081 Baronissi, Salerno, ITALY - {gencos, jentor}@dia.unisa.it
* Dipartimento di Ingegneria dell'Informazione ed Ingegneria Elettrica, Facoltà di Ingegneria di Benevento, Università di Salerno, 82100 Benevento, ITALY - [email protected]
** Dipartimento di Matematica Pura e Applicata, Università di L'Aquila, 67100 L'Aquila, ITALY - [email protected]

Abstract

In this paper we present a framework of syntactic models for the definition and implementation of visual languages. We analyze a wide range of existing visual languages and, for each of them, we propose a characterization according to a syntactic model. The framework has been implemented in the Visual Language Compiler-Compiler (VLCC) system. VLCC is a practical, flexible and extensible tool for the automatic generation of visual programming environments, which allows visual languages to be implemented once they have been modeled according to one of the syntactic models.

1. Introduction

In recent years, much research has focused on formal models to describe the syntax of visual languages. Basically, two approaches for representing visual language sentences have been defined. One of these, the relation-based representation, represents a sentence as a set of graphical objects and a set of relations on them [43, 46]. The other, the attribute-based representation, conceives a sentence as a set of attributed graphical objects [10, 21, 34, 48]. In the first case, a visual sentence is defined by listing all the objects of the sentence and all the relations holding on them; as an example, a symbol r in vertical concatenation with a symbol c is represented as the pair ({r, c}, ver_conc(r, c)). In the second case, all the graphical objects must be listed together with the values of their attributes; as an example, the sentence above will be represented as {r(0, 0), c(0, -1)}, where (0, 0) and (0, -1) are the coordinates of the positions of r and c in the Cartesian plane, respectively. Even though the relation ver_conc is not made explicit in the input representation, its presence is implicit and can always be verified from the attribute values.
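To make the contrast concrete, the following Python sketch (not part of the original paper; the encoding and the function name ver_conc merely mirror the example above and are only illustrative) builds both representations of the sentence and shows how the implicit relation can be recovered from the attribute values.

```python
# Relation-based: objects plus explicit relations over them.
relation_based = ({"r", "c"}, [("ver_conc", "r", "c")])

# Attribute-based: attributed objects only; relations stay implicit
# and are recovered from the attribute values.
attribute_based = {"r": (0, 0), "c": (0, -1)}   # (x, y) positions


def ver_conc(sentence, a, b):
    """Check the implicit vertical-concatenation relation on attribute values:
    a sits directly above b (same column, b one unit below a)."""
    ax, ay = sentence[a]
    bx, by = sentence[b]
    return ax == bx and by == ay - 1


print(ver_conc(attribute_based, "r", "c"))  # True
```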

Based on the attribute-based approach, the syntax of visual languages can be characterized depending on the structure of their graphical objects and on the way they can be composed in order to form visual sentences. As an example, an icon-oriented visual language is based on a set of primitive icons, each one characterized by the coordinates of the upper-left and lower-right points of its bounding box, and on implicit relations (such as horizontal and vertical concatenation) used to spatially arrange those icons and form iconic sentences. In this paper we present a framework of syntactic models for visual languages. For each model we provide its syntactic features by specifying the attributes for a generic language object and the implicit relations that can act on them. It should be clear that syntactic models and visual languages are independent. We show that the same visual language may be syntactically modeled in different ways. The choice of the most appropriate model for a visual language depends on how naturally it describes that language and on the needs of the visual language implementor. The proposed framework allows a hierarchical characterization of visual language environments, naturally implemented through object-oriented technology. This has led to an easy integration of the syntactic models in the implementation of the Visual Language Compiler-Compiler (VLCC) system [10]. VLCC is a grammar-based graphical system that inherits, and extends to the visual field, concepts and techniques of compiler generation tools like YACC [27].

2. Visual language syntax

A visual language may be conceived as a collection of visual sentences given by graphical objects which are arranged in two or more dimensions.

Visual languages are syntactically described by the graphical objects of the language (the vocabulary), the relations used to compose the sentences, and a set of rules defining the set of visual sentences belonging to the language. The graphical objects of a visual language vocabulary are characterized by a set of attributes. The attributes of a graphical object can be distinguished into graphical attributes, syntactic attributes and semantic attributes. Graphical attributes characterize the image of the object; typical graphical attributes are position, size, shape, color, name, etc. Syntactic attributes are used to relate graphical objects in order to form visual sentences. A set of graphical objects forms a visual sentence once all the syntactic attributes have been instantiated. Syntactic analysis techniques can be used to check whether the syntactic attributes of the graphical objects in a visual sentence are instantiated according to a proper set of relations, i.e., to check whether a visual sentence belongs to a visual language. Semantic attributes are used to associate semantics with a graphical object; they can be used either to translate a visual sentence into a target format in a syntax-directed fashion or to execute a visual sentence. The type of syntactic attributes associated with the graphical objects of a visual language and the type of relations that can be used to compose visual sentences are strongly related and define a syntactic model. As an example, let us consider the visual sentence in Figure 2.1, representing a flowchart. It can be modeled as the interconnection of the graphical objects start, predicate, function, and halt.

Figure 2.1. A flowchart.

Each object has a set of pre-defined attaching points as syntactic attributes. The attaching points are connected through polylines which visually depict the control-flow relation among objects. In this case, the semantics associated with the attaching points of the graphical objects implicitly defines the direction of the connections. If a, b and c denote the three interconnections (start, predicate, function), (predicate, function) and (predicate, halt), respectively, the syntactic attributes of the objects forming the sentence in Figure 2.1 are instantiated as in Table 2.1. Different syntactic models can be defined by varying the type of syntactic attributes of graphical objects and the relations used to compose visual sentences. For example, the visual sentence in Figure 2.2 shows the layout of the front page of a two-column paper. This sentence belongs to a visual language whose graphical objects are rectangular boxes.

object name | attaching point 1 | attaching point 2 | attaching point 3
start       | a                 | -                 | -
predicate   | a                 | b                 | c
function    | b                 | a                 | -
halt        | c                 | -                 | -

Table 2.1. An alternative representation of the flowchart in Figure 2.1.
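The Python sketch below is a hypothetical rendering of the encoding behind Table 2.1: each object lists the connection identifiers held by its attaching points, and the interconnections a, b and c are recovered by grouping the attaching points that share an identifier. The data structures and names are illustrative assumptions, not the actual VLCC data format.

```python
from collections import defaultdict

flowchart = {                       # object name -> attaching-point values
    "start":     ["a"],
    "predicate": ["a", "b", "c"],
    "function":  ["b", "a"],
    "halt":      ["c"],
}

# Recover each connection (link) as the set of (object, point index) pairs
# sharing the same identifier, as a parser for this model might do.
connections = defaultdict(list)
for obj, points in flowchart.items():
    for i, conn_id in enumerate(points, start=1):
        connections[conn_id].append((obj, i))

for conn_id, ends in sorted(connections.items()):
    print(conn_id, "joins", ends)
# a joins [('start', 1), ('predicate', 1), ('function', 2)]
# b joins [('predicate', 2), ('function', 1)]
# c joins [('predicate', 3), ('halt', 1)]
```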

In this case, the coordinates of the upper-left and lower-right points of the rectangle can be used as syntactic attributes; they give complete information about the size and position of a graphical object. Moreover, geometric relations, such as horizontal and vertical concatenation, and spatial inclusion can be used to compose visual sentences.

Figure 2.2. A sample box sentence.
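As an illustration (not taken from the paper), the sketch below shows how relations such as horizontal and vertical concatenation and spatial inclusion can be verified directly from the two corner points of each rectangle; the predicate names and the coordinate convention (y grows downwards) are assumptions.

```python
from typing import NamedTuple


class Box(NamedTuple):
    x1: float  # upper-left x
    y1: float  # upper-left y
    x2: float  # lower-right x
    y2: float  # lower-right y (y grows downwards)


def hor_conc(a: Box, b: Box) -> bool:
    """b is horizontally concatenated to a: it starts where a ends."""
    return b.x1 == a.x2 and b.y1 == a.y1


def ver_conc(a: Box, b: Box) -> bool:
    """b is vertically concatenated below a."""
    return b.y1 == a.y2 and b.x1 == a.x1


def includes(a: Box, b: Box) -> bool:
    """Spatial inclusion: b lies entirely inside a."""
    return a.x1 <= b.x1 and a.y1 <= b.y1 and b.x2 <= a.x2 and b.y2 <= a.y2


title = Box(0, 0, 10, 2)
column1 = Box(0, 2, 5, 12)
column2 = Box(5, 2, 10, 12)
print(ver_conc(title, column1), hor_conc(column1, column2))  # True True
```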

The two examples shown in Figures 2.1 and 2.2 reflect the two basic modalities that can be used to compose visual sentences: either connecting or spatially arranging graphical objects. In the first case, the syntactic attributes of graphical objects are instantiated by means of link relations (the connections between them). In the second case, the syntactic attributes are automatically instantiated once a graphical object is placed in the Cartesian plane, while the relations between two objects are implicitly derived from their relative positions. It is worth noting that the same visual language can be syntactically modeled in different ways. For example, depending on the type of syntactic attributes and relations considered, the graph shown in Figure 2.3 can be modeled either by explicitly interconnecting nodes and arrows (the graphical objects) or by spatially arranging them in the plane, in such a way that the position of an end point of an arrow coincides with the coordinates of a point on the circumference of a node.

Figure 2.3. A sample graph.
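The following sketch illustrates this second, geometric reading of Figure 2.3 under the assumption that nodes are drawn as circles and arrows as line segments; the tolerance-based test is only one possible way to check that an arrow end point lies on a node's circumference.

```python
import math


def on_circumference(point, centre, radius, eps=1e-6):
    """True when point lies on the circle of the given centre and radius."""
    return abs(math.dist(point, centre) - radius) < eps


n1 = {"centre": (0.0, 0.0), "radius": 1.0}
arrow_n1_n2 = {"start": (1.0, 0.0), "end": (4.0, 0.0)}   # leaves n1 towards n2

print(on_circumference(arrow_n1_n2["start"], n1["centre"], n1["radius"]))  # True
```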

In the second case, the implicit relations among nodes and arrows are geometrically derived. The choice of the syntactic model used to specify a visual language depends on its intrinsic characteristics: for example, for graph languages the connection-based syntactic model is more natural than the geometric-based one. In addition to the syntactic attributes and relations provided by the particular model, each visual language is in general associated with a special relation named annotate, which allows graphical objects to be associated with visual sentences. As an example, in the flowchart in Figure 2.1 the relation annotate makes it possible to associate each node with a piece of code describing the action or condition to be executed. This relation is different from the inclusion relation since it establishes a relationship which is logical rather than spatial and, if used recursively, it makes it possible to create nested visual languages.

Visual syntactic model | Relations                                  | Syntactic attributes
Connection-based       | any connection                             | a pre-defined number of sets of attaching points
Geometric-based        | any spatial composition                    | a set of points of the graphical object
Plex                   | any connection                             | a pre-defined number of attaching points
Box                    | any spatial composition                    | the upper-left and the lower-right points of the box
Iconic                 | any spatial concatenation plus overlapping | the upper-left and the lower-right points of the icon
Symbolic               | spatial concatenation                      | the coordinates of the symbol
String                 | string concatenation                       | the position of the symbol in the string

Table 3.1. Basic classification of syntactic models for visual languages.

3. Syntactic models for visual languages

In this section we give a hierarchy of syntactic models. This hierarchy is not meant to be exhaustive, but it provides a wide characterization of existing visual languages. Table 3.1 summarizes the basic syntactic models described in this section. The connection-based and geometric-based models reflect the two modalities for composing visual sentences: either connecting or spatially arranging graphical objects. Other basic syntactic models can be derived from these two models. The composition of two or more defined models leads to hybrid syntactic models for the characterization of more complex visual languages. In general, a hybrid visual language can be modeled either by a hybrid syntactic model or by a basic syntactic model where the relation annotate associates graphical objects of that language with visual sentences of languages from other models. Examples of hybrid languages are the heterogeneous visual languages [15], i.e. languages integrating visual and textual notation (like flowcharts annotated with text). Figure 3.1 shows how the syntactic models are hierarchically related. Note the hybrid syntactic model Box-and-Graph, derived from the basic syntactic models Connection-based and Box.

[Figure 3.1: a tree rooted at Syntactic Models with the basic models Connection-based (specialized by Plex) and Geometric-based (specialized by Box and Iconic, with Iconic further specialized by Symbolic and then String), plus the hybrid model Box-and-Graph derived from Connection-based and Box.]

Figure 3.1. A hierarchy of syntactic models.

3.1 The connection-based models

Connection-based
The visual sentences described by this syntactic model are formed by a set of interconnected objects. The syntactic attributes of an object can be single attaching points or sets of attaching points (attaching areas). The value of an attaching point participating in a connection is given by a unique identifier for that connection. There may be many types of connections among the attaching points of objects, and in this paper we consider three of them (see Figure 3.2): attaching points can be connected point-to-point (a), connected by plugging protrusions ("knobs") into correspondingly shaped indentations ("sockets") as in a jigsaw puzzle (b), or connected by joining them through links (c). The connection type is selected depending on the characteristics of the visual language under study. As an example, edge-labeled graphs and directed graphs may be modeled as sets of nodes and edges where the circumference of a node (an attaching area) is connected point-to-point to at least one end (an attaching point) of an edge. This representation takes into account the fact that, in these cases, an edge not only connects nodes but also conveys information about labels and direction. On the other hand, undirected graphs can be modeled either as sets of nodes and edges (graphical objects) interconnected point-to-point, as in the previous case, or as sets of nodes joined through polylines ("links") representing relationships between nodes. Note that, if needed, all three types of connection may be used simultaneously for a visual language.

Figure 3.2. Three types of connections: point-to-point (a), puzzle (b) and link (c).
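A minimal object-oriented sketch of this modeling of directed graphs is given below; the class and attribute names are hypothetical and are only meant to show how an attaching area (which may take part in many connections) differs from the two attaching points of an edge.

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.area = []              # attaching area: list of connection identifiers


class Edge:
    def __init__(self, label):
        self.label = label
        self.tail = None            # attaching point: one connection identifier
        self.head = None


def connect(node, edge, end, conn_id):
    """Point-to-point connection between an edge end and a node's attaching area."""
    node.area.append(conn_id)
    setattr(edge, end, conn_id)


n1, n2 = Node("n1"), Node("n2")
e = Edge("x")
connect(n1, e, "tail", "c1")        # the edge labelled x goes from n1 ...
connect(n2, e, "head", "c2")        # ... to n2
print(n1.area, e.tail, e.head, n2.area)   # ['c1'] c1 c2 ['c2']
```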

The connection-based model may be used to describe the syntax of a number of graph languages. Examples include languages based on data flow graphs, state transition diagrams, Petri nets and entity-relationship diagrams. In data flow graph languages the operations are typically associated with boxes and the data flows along the arrows connecting them. In general, graphical objects of such languages may present a variable number of attaching points (an attaching area), as in the case of Data Flow Diagrams used for specifying the data flow in software systems, or in the case of PegaSys [35], a diagrammatic system designed to provide a programming environment where all steps of the software life cycle are supported by graphical interaction languages. Phred [5] is a visual parallel programming environment that allows a software designer to create Phred programs, to statically analyze them for determinacy, and to interpret them. A Phred program is composed of a control flow graph, a data flow graph, and a set of node interpretations represented by procedure specifications (written in the C language). Transition diagrams of finite state automata are another typical example of connection-based languages. The graphical objects of such languages are nodes and arcs representing the states and the transitions of the automaton, respectively.

The attaching area of a node is given by its circumference, while an arc has two attaching points corresponding to its end-points. An example of a language based on state transition diagrams has been proposed by Jacob [26] for specifying user interfaces in a graphical manner. Many visual programming systems are based on the Petri net formalism for specifying concurrent and real-time systems [1]. For example, the MOPS-2 system [2] uses coloured Petri nets to allow parallel systems to be constructed and simulated in a visual manner. The VERDI system [22] uses a form of Petri nets for specifying and simulating distributed systems: the specification is animated by moving tokens around the network. Cabernet [40] is a visual environment based on high-level Petri nets [18] designed to support the specification and verification of real-time systems.

Plex
The syntactic model Plex is a sub-type of the connection-based model; here, attaching areas are not allowed. The languages corresponding to this category were first defined by Feder in [16]. The plex model can be used to describe the syntax of visual languages like flowcharts, chemical structures, Boolean and electric circuits, and so on. For instance, in flowchart languages each graphical object (boxes, diamonds, etc.) has a pre-defined number of attaching points (two for boxes, three for diamonds, etc.), and graphical objects can be connected only through links visualized as polylines. Examples of flowchart-based languages include FPL (First Programming Language) [13], particularly suited to help novices learn programming since it eliminates syntactic errors, OPAL [36], which allows doctors to enter knowledge about cancer treatments into an expert system, GAL [3] and PIGS [42], which use a flowchart variant called Nassi-Shneiderman flowcharts [37], and Pict [19], which uses conventional flowcharts but is differentiated by its use of colour pictures rather than text inside the flowchart boxes. Data flow graph languages whose boxes have a fixed number of attaching points can also be modeled according to the plex syntax. This is the case, for example, of PROGRAPH [41], a structured, functional language introduced to solve comprehension problems of conventional textual functional languages, and InterCONS [45], a data flow system developed in Smalltalk that supports programming by example. Nested data flow graph languages can also be modeled according to the plex syntax, where the nesting is obtained by using the relation annotate. Usually, in a nested data flow graph each node represents an operator (function), an operand (variable), or another directed graph represented by an iconic name. Examples include the programming-by-example system Fabrik [32], developed in Smalltalk and used for constructing user interfaces, and the commercial product LabVIEW [38], running on Macintosh and used for controlling external devices.

LabVIEW provides procedural abstraction, control structures, and many useful primitive components such as knobs, switches, and mathematical functions. Show and Tell [29, 39] is a visual programming language for defining interfaces of relational databases (schemas and queries). A Show and Tell database schema is defined by a two-dimensional layout of base boxes, while a query is defined by a data flow program. Other general purpose nested data flow visual programming language systems are HyperFlow [30] and its derivatives ProtoHyperflow [17] and HF/PP [31]. The syntax of these languages consists of boxes and arrows, a box representing a process and an arrow representing a data flow between processes. Boxes may contain data flow graphs for defining new functions and for building conditionals. HyperFlow and HF/PP are designed as visual languages for a pen-based multimedia system, while ProtoHyperflow is implemented on a traditional mouse/CRT-based system. Another visual language that can be modeled by the plex syntax has been recently proposed by Karsai [28]. It consists of a configurable visual programming environment that can be customized for various application domains. The BLOX languages proposed by Glinert [20] are another example of plex languages. Visual sentences are composed by joining graphical objects using the usual jigsaw-puzzle "lock and key" metaphor to plug protrusions ("knobs") into correspondingly shaped indentations ("sockets") so that the two juxtaposed tiles interlock. Moreover, BLOX substructures can be encapsulated (nested) into BLOX elements. Languages for representing logical circuits and structured programs (in a similar way to nested flowcharts, where function boxes may hide sub-flowcharts) have been defined using the BLOX methodology.

3.2 The geometric-based models

Geometric-based
In this model, a visual sentence is described as a set of graphical objects spatially arranged in the Cartesian plane. The syntactic attributes of a graphical object are the coordinates of a set of representative points on its image. Syntactic attributes are automatically instantiated when placing the graphical object in the Cartesian plane. Several spatial composition rules can be used to form visual sentences; examples include metric-based relations, such as horizontal and vertical concatenation, spatial inclusion, adjacency and intersection. As an example, Venn diagrams used to depict set relations, such as inclusion and intersection, can be modeled according to the geometric-based syntactic model (see Figure 3.3). The geometric-based model can be used to describe high-level interface languages for geographical information systems. For example, Cigales [4] is a graphical query language based upon spatial query by example.

The basic idea of this language is to express a query by drawing a pattern according to the user's mental model of the data to be retrieved. The graphical objects of this language are geometric objects, such as lines and areas, while queries (sentences) are composed by means of spatial rules, such as inclusion, intersection, and adjacency.

Figure 3.3. A sample Venn diagram.
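Assuming the graphical objects are circles described by a centre and a radius, a minimal sketch of two such spatial composition rules, inclusion and intersection, is the following; the predicates are illustrative, not the paper's definitions.

```python
import math


def includes(outer, inner):
    """The inner circle lies entirely within the outer one."""
    d = math.dist(outer["c"], inner["c"])
    return d + inner["r"] <= outer["r"]


def intersects(a, b):
    """The two circle boundaries cross each other."""
    d = math.dist(a["c"], b["c"])
    return abs(a["r"] - b["r"]) < d < a["r"] + b["r"]


A = {"c": (0.0, 0.0), "r": 3.0}
B = {"c": (1.0, 0.0), "r": 1.0}     # B is contained in A
C = {"c": (3.5, 0.0), "r": 1.0}     # C overlaps the border of A
print(includes(A, B), intersects(A, C))   # True True
```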

Box
This is a special case of the geometric-based syntactic model. Here an object must be described by a square or a rectangle. The set of representative points used as syntactic attributes is given by the upper-left and lower-right points of the box. Visual sentences are composed using the same spatial relations as in the geometric-based model. As an example, the syntactic model Box can be used for visual languages describing document layouts (see Figure 2.2), descriptions of structured programs, etc.

Iconic
This model is another specialization of the geometric-based one. The main difference is that the concept of area is not defined on icons, thus preventing the use of spatial relations such as intersection and inclusion in the composition of iconic sentences. The spatial relations used to form iconic sentences are overlapping and the remaining geometric relations, such as horizontal and vertical concatenation. The syntactic attributes of an icon are the coordinates of the upper-left and lower-right points of its bounding box. Figure 3.4 shows two simple iconic sentences: the first one, taken from the Heidelberg Icon Set [44], describes the operation "delete string", while the second one, taken from [12], represents the operation "display a text file".

Figure 3.4. Two iconic sentences.

A number of visual languages have been proposed in the literature which may be syntactically described according to the iconic model. For example, HI-VISUAL [25] provides an iconic framework to construct data flow programs by composing icons on a two-dimensional display screen according to pre-defined rules. Media Streams [14] is an iconic language system which enables users to create multi-layered iconic annotations of streams of video data.

The SIL-ICON [6] compiler is a system for specifying, interpreting and prototyping icon-oriented systems. It uses a context-free grammar augmented with spatial operators to define a visual language, allowing iconic sentences to be constructed by arranging icons in a two-dimensional fashion and to be parsed and interpreted according to the specification rules of the language. Minspeak [7] is an iconic language system used for augmentative communication by people with speech disabilities. It consists of multi-meaning one-dimensional iconic sentences which are used to retrieve messages (words or word sequences) stored in the memory of a microcomputer. A built-in speech synthesizer generates the voice output.

Symbolic
The symbolic model is a specialization of the iconic one. The only syntactic attributes of a symbol are the coordinates of its position in the space, i.e., the coordinates of a representative point, such as its centroid. The spatial relations that can be used to compose symbolic sentences are the same as in the case of iconic sentences, with the exception of overlapping. As an example, some symbolic languages for two-dimensional arithmetical expressions can be found in [9, 10].

String
This model directly derives from the symbolic one. Here a graphical object is a character whose only syntactic attribute is its position in a one-dimensional space (a string). The only relation is string concatenation. This syntactic model is suitable for all string languages.
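As a sketch of the last two models (with relation names that are ours, not those of [9, 10]), the fragment below derives a "raised symbol" relation from centroid positions, the kind of spatial relation a symbolic parser for two-dimensional arithmetic expressions might use; in the string model the only attribute left is the index of the symbol in a one-dimensional sequence.

```python
# Symbolic model: each symbol's only syntactic attribute is a representative
# point (here, its centroid); relations are derived by comparing positions.
symbols = {"x": (0.0, 0.0), "2": (1.0, 0.8)}   # centroid coordinates


def right_of(s, t):
    return symbols[s][0] > symbols[t][0]


def raised(s, t, threshold=0.5):
    """s is written above and to the right of t, as an exponent would be."""
    return right_of(s, t) and symbols[s][1] - symbols[t][1] > threshold


print(raised("2", "x"))   # True: the layout can be read as x**2

# String model: positions degenerate to indices in a one-dimensional sequence,
# and the only composition relation is concatenation.
string_sentence = {"x": 0, "+": 1, "y": 2}     # symbol -> position in the string
```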

3.3 The hybrid models

Box-and-Graph
Box-and-Graph is an example of a hybrid syntactic model obtained by grouping features of the Box and connection-based models. A graphical object has as syntactic attributes both the upper-left and lower-right points of its bounding box and a pre-defined number of attaching areas and/or attaching points. The relations that can be used to compose visual sentences are both geometric (spatial inclusion, horizontal and vertical concatenation, etc.) and connection-based. Statecharts [23, 24] are an example of a language that can be modeled as Box-and-Graph. Statecharts are an extension of state transition diagrams introduced to specify large and complex reactive systems, that is, event-driven systems continuously reacting to stimuli. The main features of statecharts are AND/OR decomposition of states, inter-level transitions, and a broadcast mechanism for communication between concurrent components. Graphical objects representing states or AND/OR decompositions of states may contain nested statecharts. Examples of visual languages based on the statechart formalism include Miro [33], a language for defining security constraints (in particular user accesses to files) in operating systems, and StateMaster [47], a visual system for programming graphical user interfaces.

4. Conclusions

We have presented a framework of syntactic models for visual languages and provided a wide characterization of existing visual languages by modeling each language with what we feel is the most appropriate syntactic model. The proposed framework underlies a hierarchical characterization of visual language environments which has guided the object-oriented implementation of the Visual Language Compiler-Compiler (VLCC) system [10]. VLCC is a tool for the automatic generation of visual programming environments implementing visual languages. A prototype of the VLCC system has been implemented using object-oriented technology under MS Windows™ 3.x and MS Windows 95/NT™. VLCC is able to implement a visual language according to its syntactic model. Thanks to the modular structure of the system and its object-oriented architecture, new syntactic models can easily be implemented. Figures 4.1 (a), (b) and (c) show three VLCC-generated visual programming environments implementing, respectively, the visual languages boxes (Box), semi-structured flowcharts (Plex) and entity-relationship diagrams (connection-based). The last two languages are actually hybrid languages since they allow a user to annotate an object on the screen with string sentences. This is done by selecting the TEXT button and clicking on the chosen graphical token; this action launches a text editor for the insertion of a string sentence (see Figure 4.1 (b)). The figures also show the results of the compilation processes. In (a), a set of interrelated boxes is recognized as a sentence of a simple box language; in (b) a semi-structured flowchart is translated into Pascal-like code; finally, in (c) and (d), an entity-relationship diagram and its translation into SQL code are shown. More details on the VLCC tool and the visual languages generated can be found at the WWW site http://www.unisa.it/gencos.dir/vlcc.htm. As further research, we are focusing on the definition and implementation in VLCC of syntactic models for the description of dynamic visual languages [8], where graphical object attributes for handling temporal information need to be introduced.

Figure 4.1. VLCC-generated visual programming environments (a)-(d).

References

[1] T. Ae and R. Aibara, "A rapid prototyping of real-time software using Petri nets", Proceedings of the IEEE Workshop on Visual Languages, Linkoping, Sweden, IEEE CS Press, 1987, pp. 234-241.
[2] T. Ae, M. Yamashita, W.C. Cunha, and H. Matsumoto, "Visual user-interface of a programming system: MOPS-2", Proceedings of the IEEE Workshop on Visual Languages, Dallas, Texas, IEEE CS Press, 1986, pp. 44-53.
[3] M.B. Albizuri-Romero, "GRASE -- A graphical syntax directed editor for structured programming", ACM SIGPLAN Notices, vol. 19, 1984, pp. 28-37.
[4] M.A. Aufaure-Portier, "A high level interface language for GIS", Journal of Visual Languages and Computing, vol. 6, 1995, pp. 167-182.
[5] A. Beguelin and G. Nutt, "Visual parallel programming and determinacy: a language specification, an analysis technique, and a programming tool", Journal of Parallel and Distributed Computing, 1994.
[6] S.K. Chang, M.J. Tauber, B. Yu, and J.S. Yu, "A visual language compiler", IEEE Transactions on Software Engineering, vol. 15, no. 5, May 1989, pp. 506-525.
[7] S.K. Chang, G. Costagliola, S. Orefice, G. Polese, and B.R. Baker, "A Methodology for Iconic Language Design with Application to Augmentative Communication", Proceedings of the IEEE Workshop on Visual Languages, IEEE CS Press, 1992, pp. 110-116.
[8] S.K. Chang, "Dynamic Visual Languages", Proceedings of the 1996 IEEE Symposium on Visual Languages, pp. 308-315.
[9] G. Costagliola and S.K. Chang, "DR PARSERS: a generalization of LR Parsers", Proceedings of the IEEE Workshop on Visual Languages, IEEE CS Press, 1990, pp. 174-180.
[10] G. Costagliola, G. Tortora, S. Orefice, and A. De Lucia, "Automatic generation of visual programming environments", IEEE Computer, vol. 28, no. 3, March 1995, pp. 56-66.
[11] G. Costagliola, G. Tortora, S. Orefice, and A. De Lucia, "A parsing methodology for the implementation of visual systems", Technical Report 1/95, Dipartimento di Informatica ed Applicazioni, University of Salerno, 1995 (available from the authors).
[12] C. Crimi, A. Guercio, G. Pacini, G. Tortora, and M. Tucci, "Automating visual language generation", IEEE Transactions on Software Engineering, vol. 16, no. 10, October 1990, pp. 1122-1135.
[13] N. Cunniff, R.P. Taylor, and J.B. Black, "Does programming language affect the type of conceptual bugs in beginners' programs? A comparison of APL and Pascal", Proceedings of SIGCHI '86: Human Factors in Computing Systems, Boston, MA, 1986, pp. 175-182.
[14] M. Davis, "Media Streams: An Iconic Visual Language for Video Annotation", Proceedings of the IEEE Symposium on Visual Languages, IEEE CS Press, 1993, pp. 196-201.
[15] M. Erwig and V. Meyer, "Heterogeneous Visual Languages - Integrating Visual and Textual Programming", in V. Haarslev (Ed.), Proceedings of the IEEE Symposium on Visual Languages, September 1995.
[16] J. Feder, "Plex languages", Information Science, vol. 3, 1971, pp. 225-241.
[17] A.S. Fukunaga, T.D. Kimura, and W. Pree, "Object-Oriented Development of a Data Flow Visual Language System", Proceedings of the IEEE Symposium on Visual Languages, IEEE CS Press, 1993, pp. 134-141.
[18] C. Ghezzi, D. Mandrioli, S. Morasca, and M. Pezzé, "A unified high-level Petri net formalism for time-critical systems", IEEE Transactions on Software Engineering, vol. SE-17, no. 2, February 1991.
[19] E.P. Glinert and L. Tanimoto, "Pict: an interactive graphical programming environment", IEEE Computer, vol. 17, 1984, pp. 7-25.
[20] E. Glinert, "Towards second generation interactive graphical programming environments", Proceedings of the IEEE Workshop on Visual Languages, IEEE CS Press, 1986, pp. 61-70.
[21] E.J. Golin and S.P. Reiss, "The Specification of Visual Language Syntax", Proceedings of the IEEE Workshop on Visual Languages, IEEE CS Press, 1989, pp. 105-110.
[22] M. Graf, "A visual environment for the design of distributed systems", Proceedings of the IEEE Workshop on Visual Languages, Linkoping, Sweden, IEEE CS Press, 1987, pp. 330-344.
[23] D. Harel, "Statecharts: a visual formalism for complex systems", Science of Computer Programming, vol. 8, 1987, pp. 231-274.
[24] D. Harel, "On visual formalisms", Communications of the ACM, vol. 31, no. 5, May 1988, pp. 514-530.
[25] M. Hirakawa, M. Tanaka, and T. Ichikawa, "An iconic programming system, HI-VISUAL", IEEE Transactions on Software Engineering, vol. 16, no. 10, October 1990, pp. 1178-1184.
[26] R.J.K. Jacob, "A state transition diagram language for visual programming", IEEE Computer, vol. 18, 1985, pp. 51-59.
[27] S.C. Johnson, "YACC: Yet Another Compiler Compiler", Bell Laboratories, Murray Hill, NJ.
[28] G. Karsai, "A configurable visual programming environment: a tool for domain-specific programming", IEEE Computer, vol. 28, no. 3, March 1995, pp. 36-44.
[29] T.D. Kimura, J.W. Choi, and J.M. Mack, "A visual language for keyboard-less programming", Technical Report WUCS-86-6, Department of Computer Science, Washington University, St. Louis, 1986.
[30] T.D. Kimura, "HyperFlow: a visual programming language for pen computers", Proceedings of the IEEE Workshop on Visual Languages, Seattle, Washington, IEEE CS Press, 1992, pp. 125-132.
[31] T.D. Kimura, A. Apte, S. Sengupta, and J.W. Chan, "Form/Formula: a visual programming paradigm for user-definable user interfaces", IEEE Computer, vol. 28, no. 3, March 1995, pp. 27-35.
[32] F. Ludolph, Y.Y. Chow, D. Ingalls, S. Wallace, and K. Doyle, "The Fabrik Programming Environment", Proceedings of the IEEE Workshop on Visual Languages, Pittsburgh, PA, IEEE CS Press, 1988, pp. 222-230.
[33] M.W. Maimone, J.D. Tygar, and J.M. Wing, "Miro semantics for security", Proceedings of the IEEE Workshop on Visual Languages, Pittsburgh, PA, IEEE CS Press, 1988, pp. 45-51.
[34] K. Marriott, "Constraint Multiset Grammars", Proceedings of the 1994 IEEE Symposium on Visual Languages, IEEE CS Press, 1994, pp. 118-125.
[35] M. Moriconi and D.F. Hare, "Visualizing Program Design through PegaSys", IEEE Computer, vol. 18, no. 8, August 1985.
[36] M.A. Musen, L.M. Fagen, and E.H. Shortliffe, "Graphical specification of procedural knowledge for an expert system", Proceedings of the IEEE Workshop on Visual Languages, Dallas, Texas, 1986, pp. 167-178.
[37] I. Nassi and B. Shneiderman, "Flowchart techniques for structured programming", ACM SIGPLAN Notices, vol. 8, 1973, pp. 12-26.
[38] National Instruments, LabVIEW, 12109 Technology Blvd, Austin, Texas, 78727.
[39] M. Najork and E. Golin, "Enhancing Show-and-Tell with a polymorphic type system and higher order functions", Proceedings of the IEEE Workshop on Visual Languages, Skokie, Illinois, 1990.
[40] C. Ghezzi and M. Pezzè, "Cabernet: An Environment for the Specification and Verification of Real-Time Systems", 1992 DECUS Europe Symposium, Cannes, France, 7-11 September 1992.
[41] T. Pietrzykowsky, S. Matwin, and T. Muldner, "The programming language PROGRAPH: yet another application of graphics", Proceedings of Graphics Interface '83, Edmonton, Alberta, 1983, pp. 143-145.
[42] M.C. Pong and N. Ng, "PIGS -- A system for programming with interactive graphical support", Software Practice and Experience, vol. 13, 1983, pp. 847-855.
[43] J. Rekers and A. Schurr, "A graph grammar approach to graphical parsing", Proceedings of the 1995 IEEE Symposium on Visual Languages, IEEE CS Press, 1995, pp. 195-202.
[44] G. Rohr, "Using Visual Concepts", in S.K. Chang, T. Ichikawa, and P. Ligomenides (Eds.), Visual Languages, Plenum, New York, 1986.
[45] D.N. Smith, "Visual Programming in the Interface Construction", Proceedings of the IEEE Workshop on Visual Languages, Pittsburgh, PA, IEEE CS Press, 1988, pp. 109-120.
[46] M. Tucci, G. Vitiello, and G. Costagliola, "Parsing Nonlinear Languages", IEEE Transactions on Software Engineering, vol. 20, no. 9, September 1994.
[47] P.D. Wellner, "StateMaster: a UIMS based on statecharts for prototyping and target implementation", Proceedings of SIGCHI '89: Human Factors in Computing Systems, Austin, Texas, 1989, pp. 177-182.
[48] K. Wittenburg, "Earley-style parsing for Relational Grammars", Proceedings of the 1992 IEEE Workshop on Visual Languages, 1992, pp. 192-199.
