Use of attribute grammars in compiler construction

Use of Attribute Grammars in Compiler Construction

W. M. Waite
Department of Electrical and Computer Engineering
University of Colorado
Boulder, CO 80309-0425 USA

ABSTRACT: Attribute grammars were initially proposed as a tool for describing language semantics. Despite many years of development, however, they have had little impact upon practical compiler construction. Clearly attribute grammars are not the panacea claimed by some of their devotees; this paper considers their actual role in compiler construction.

1. Introduction

Attribute grammars have their origin in the so-called "syntax-directed" compiler technology of the early '60s.1,2 This technology used only synthesized attributes. In 1968, Knuth3 pointed out that certain properties of a language could be described more easily if inherited attributes were also allowed. He showed that this approach did not provide any increase in power, but merely simplified the description and made it more intuitive. Although descriptions may be simplified by using inherited attributes, it is also possible to use them to write descriptions of attributions that are not computable. Knuth stated an algorithm for testing a description to see whether the specified attributes can be effectively computed. Later research showed that Knuth's algorithm described a sufficient condition for effective computability, but not a necessary one.4 The test for effective computability has been proven to have exponential complexity,5 so most further work in attribute grammars has concentrated on various sub-classes of the grammars whose attributes are effectively computable.

From the beginning, attribute grammars have been touted as methods for specifying compilers. A tremendous amount of work has been devoted to understanding their properties, developing analyzers for them, and using them to describe translations. Virtually all of this effort, however, has been "academic"; people who really write compilers have been singularly unimpressed with attribute grammars. Why? What is the real role of attribute grammars in compiler construction?

To answer that question, it is necessary to think about the automation of programming in general and the solution to the compilation problem in particular. One of the most difficult things for a person interested in a particular technology to do is to view that technology in a larger context. The larger context for attribute grammars is

program construction, so Section 2 considers the characteristics of that problem. In Section 3, the general discussion is sharpened to compiler construction. Section 4 shows where the concept of attribute grammars fits into the problem of compiler construction, and Section 5 draws some conclusions.

2. Program Construction

There are four fundamental steps that one must go through when solving a problem with a computer:

1) Understand the problem. This step may result in some formal description of the problem, or simply a description in plain language that can be agreed upon by the person who wants the problem solved (the customer) and the person who intends to solve it (the designer).

2) Have an idea. The designer ponders the problem and settles upon the "shape" of a solution: the approach to be taken. The idea may be rough, but all of the elements of the solution are sketched.

3) Prove that the idea solves the problem. The designer reasons more or less formally about the idea, developing an algorithm that is guaranteed to produce the correct result when the input data satisfies certain constraints.

4) Code the proven solution. The designer expresses the algorithm in a form acceptable to the computer on which it is to be executed.

Every one of these steps is required for every problem solved, although the effort devoted to them varies. Often they are iterated because the customer does not have a full understanding of the problem, or the designer's idea proves unworkable due to performance problems.

Complete solutions to significant problems are far too complex to be obtained in one step. Therefore the "solution" is likely to be stated in terms of a collection of simpler problems. The four steps must then be carried out for each of these problems in turn. Usually step 1 is effectively dealt with when the problem is posed, by stating a predicate that must be satisfied by the solution and a predicate that defines the constraints on the input. These predicates are dictated by the characteristics of the larger problem of which the new problem is a component.

As the field matures, we note that certain problems keep appearing over and over again. At first such problems are solved in isolation, but gradually we build a base of shared experience that indicates how these common problems should be attacked. For example, very few experienced designers would bother to create an elementary function routine6 or implement a basic data structure7 from first principles today. Studies have shown that the only significant measure of experience in programmers is the number of problem/solution pairs they have internalized.8 Thus, when experienced programmers have understood the problem posed by the customer, they can often short-circuit steps 2 and 3 by simply dredging up a "standard" solution.

Problem solution can be automated by encoding our experience in tools and libraries. The goal is to be able to recognize that the result of step 1 is a description of a particular instance of some standard problem. Steps 2-4 can be performed by the computer itself in that case: Since the problem is a standard one, a library of proven ideas is available. These ideas are already embodied in algorithms coded for the desired computer.

Standard problems differ in the variability of their instances. Some, like the elementary functions of mathematics, are defined completely. In that case, a single library routine for each of the machine-defined arithmetic data types is sufficient. Others, like sorting problems, are solved by generators that fix certain data or decisions in skeleton routines. The fleshed-out skeleton then becomes the solution. Still others, like database query and update, require operations tailored to the problem instance. Their instances are described by special-purpose languages, and the descriptions are compiled into an appropriate solution module.

In order to automate the solution of a standard problem, we must provide a formal descriptive mechanism with the following properties:

1) It must be unambiguous.

2) It must be able to express exactly those problem instances for which we can generate a solution.

3) The formal descriptions must be no more complex than the description of an ad-hoc solution in a programming language.

The description sqrt(expression) certainly has these three properties. To satisfy (1), we must specify the type(s) allowed for expression and the fact that the principal square root will be returned. (2) is also satisfied by fixing the type and giving the range of values that type represents. Finally, (3) is satisfied because even the shortest square root routine cannot be expressed in six characters!

The formal descriptive mechanism must be associated with some processor that carries out steps 2-4 above. In the case of sqrt(expression), that processor is actually a combination of a compiler and a library mechanism. More complex formal descriptive mechanisms, like data definition languages, require more complex processors; the principle, however, is the same.
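The "generator that fixes decisions in a skeleton routine" idea mentioned above can be sketched concretely. The following Python fragment is illustrative only and is not drawn from the paper; the names (make_sorter, key_field) are hypothetical. The skeleton is an insertion sort; the generator fixes the key-extraction decision and returns the fleshed-out routine as an ordinary function.

```python
# Hypothetical sketch: a "generator" in the sense of Section 2.
# The skeleton is an insertion sort; the generator fixes one decision
# (which record field to sort by) and returns the completed solution.

def make_sorter(key_field):
    """Generate a sort routine specialized to one record field."""
    def sorter(records):
        result = []
        for rec in records:
            # The comparison decision was fixed by the generator's argument.
            i = 0
            while i < len(result) and result[i][key_field] <= rec[key_field]:
                i += 1
            result.insert(i, rec)
        return result
    return sorter

sort_by_name = make_sorter("name")   # the generated solution module
people = [{"name": "waite"}, {"name": "knuth"}, {"name": "irons"}]
print([p["name"] for p in sort_by_name(people)])   # ['irons', 'knuth', 'waite']
```

The description make_sorter("name") plays the same role as sqrt(expression): it is shorter than the ad-hoc routine it stands for, and a processor (here, the language's closure mechanism) turns it into executable code.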

3. Compiler Construction

Table 1 shows a typical decomposition of the compilation problem. The three main components listed in the first column are concerned with determining the relationship among the parts of the program as it was written (structuring), verifying the consistency of those parts and rendering them in terms of target machine concepts (translation), and expressing the target machine concepts as a program that can be executed (encoding). Each of these main components is divided into subtasks, and the subtasks are, in turn, divided into individual problems.

Each of the problems in Table 1 is relatively well-understood. Formal descriptive mechanisms exist for some, and some have been automated. As a typical example, consider parsing, the problem of determining the phrase structure of the sequence of basic symbols making up the source program. (The sequence of basic symbols is obtained from the original input by the lexical analysis subtask.) Context-free grammars are used to describe the phrase structure formally. A parser generator checks whether the given grammar satisfies constraints that guarantee it to be unambiguous and to describe a phrase structure for which a parser can be generated. If the constraints are satisfied, then the generator actually creates a parser in some programming language.

The solutions to other problems, like name analysis, are more difficult to automate. Name analysis is the problem of associating a database key with every identifier occurrence in the source program. An applied occurrence of an identifier should be associated with the same key as the defining occurrence of that identifier. All of the information embodied in the definition of the identifier can then be stored in the database using this key, and retrieved at each applied occurrence via the same key.
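To make the parsing example above concrete, here is a hand-coded sketch, not taken from the paper, of the kind of recursive-descent parser a generator might emit for the toy grammar Expr ::= Term {'+' Term}, Term ::= digit | '(' Expr ')'. The grammar and all function names are illustrative assumptions; a real generator would first verify that the grammar admits such a parser (for example, that it is LL(1)).

```python
# Illustrative sketch: a recursive-descent parser for the toy grammar
#   Expr ::= Term {'+' Term}
#   Term ::= digit | '(' Expr ')'
# One procedure per nonterminal, mirroring the phrase structure.

def parse(symbols):
    pos = 0

    def peek():
        return symbols[pos] if pos < len(symbols) else None

    def expect(sym):
        nonlocal pos
        if peek() != sym:
            raise SyntaxError(f"expected {sym!r} at position {pos}")
        pos += 1

    def expr():
        tree = term()
        while peek() == '+':          # the {'+' Term} repetition
            expect('+')
            tree = ('+', tree, term())
        return tree

    def term():
        nonlocal pos
        if peek() == '(':
            expect('(')
            tree = expr()
            expect(')')
            return tree
        if peek() is not None and peek().isdigit():
            tok = peek()
            pos += 1
            return tok
        raise SyntaxError(f"unexpected symbol at position {pos}")

    tree = expr()
    if pos != len(symbols):
        raise SyntaxError("trailing input")
    return tree

print(parse(list("1+(2+3)")))   # ('+', '1', ('+', '2', '3'))
```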


Table 1: Decomposition of the Compiler Construction Problem

Structuring    Lexical analysis    Scanning
                                   Conversion
               Syntactic analysis  Parsing
                                   Tree construction
Translation    Semantic analysis   Name analysis
                                   Type analysis
               Transformation      Data mapping
                                   Action mapping
Encoding       Code generation     Execution-order determination
                                   Register allocation
                                   Instruction selection
               Assembly            Internal address resolution
                                   External address resolution
                                   Instruction encoding

Name analysis for several languages can be described by an abstract data type called an Environment that has five operations:9

• NewEnv(): Environment. Create a new environment unrelated to any other.

• NewScope(Environment): Environment. Create a new environment nested within another environment.

• AddIdn(Environment, Identifier, Key): boolean. Define an identifier in a specified environment, signalling an attempted multiple definition.

• KeyInEnv(Environment, Identifier): Key. Obtain the key for an identifier in a specified nest of environments.

• KeyInScope(Environment, Identifier): Key. Obtain the key for an identifier in a specified environment.

Here each value of type Environment represents a distinct scope. KeyInEnv first searches the scope represented by its argument. If the specified identifier is not bound in that scope, its parent is searched, and so forth. Thus KeyInEnv implements an environment in the sense of denotational semantics.10 KeyInScope, on the other hand, confines its search to the scope represented by its argument. Both operations return a distinguished key, NoDef, if they are unable to find a binding for the given identifier.

The Environment abstract data type can easily be implemented by a standard module,11 but this does not solve the name analysis problem. Although a C compiler and an ALGOL 60 compiler will use the same module, the module's operations will be invoked in completely different orders because the scope rules of the two languages differ. Thus a formal specification of the name analysis problem must embody both the abstract data type and a

definition of the relationship among the invocations of its operators. It is in the latter role that attribute grammars find their application.
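A minimal sketch of the five Environment operations described above may help fix the ideas. The operation names follow the paper; the linked-scope representation and the concrete NoDef value are implementation assumptions, not part of the specification.

```python
# Sketch of the Environment abstract data type of Section 3.
# Representation choice (a chain of scopes, each a dictionary) is an
# assumption; only the five operations are specified by the paper.

NoDef = None  # distinguished "no binding found" key

class Environment:
    def __init__(self, parent=None):
        self.parent = parent
        self.bindings = {}  # Identifier -> Key

def NewEnv():
    """Create a new environment unrelated to any other."""
    return Environment()

def NewScope(env):
    """Create a new environment nested within another environment."""
    return Environment(parent=env)

def AddIdn(env, identifier, key):
    """Define identifier in env; False signals an attempted multiple definition."""
    if identifier in env.bindings:
        return False
    env.bindings[identifier] = key
    return True

def KeyInScope(env, identifier):
    """Search only the scope represented by env."""
    return env.bindings.get(identifier, NoDef)

def KeyInEnv(env, identifier):
    """Search env, then its ancestors: an environment in the denotational sense."""
    while env is not None:
        if identifier in env.bindings:
            return env.bindings[identifier]
        env = env.parent
    return NoDef
```

As the text observes, such a module is language-independent; what it cannot express is the order in which a particular language's scope rules require these operations to be invoked.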

4. The Role of Attribute Grammars

Many of the problems in the third column of Table 1 involve processing of contextual information. There are three aspects to the solution of such problems:

1) How to compute specific values from other values.

2) How to perform the computations in the right order.

3) How to store the contextual information used in the computations.

In order to automate the solution of a contextual information processing problem, we need formal mechanisms from which all three aspects can be deduced.

The first aspect of the solution to a contextual information processing problem is familiar to all of us. A function is a formal specification of how to compute specific values from other values, and we have various techniques for describing functions. As has already been pointed out, functions that solve common problems can be coded and placed in a library, generated from skeletons, or created from a description of a problem instance. In the name analysis example, the first aspect of the solution is described by the functions (AddIdn, KeyInEnv, etc.) exported by the Environment abstract data type.

Computation order and intermediate storage must be deduced from the relationships among computations rather than from properties of the computations themselves. The term "programming in the large" has often been applied to the process of specifying such relationships.12 Because the computations are associated with the nodes of a tree, an attribute grammar can be used to specify the relationships among them.

The role of an attribute grammar in compiler construction is therefore to provide a formal descriptive mechanism for specifying the relationship among computations associated with the components of a program. Using this specification, we intend to automate the solutions of contextual information processing problems. By defining the role of attribute grammars in compiler construction, we fix the characteristics that are of interest to compiler designers. We also highlight the position of the attribute grammar in the overall solution of the compilation problem, thus avoiding the temptation to use this particular formalism inappropriately.
Notice that we are only concerned with the role of attribute grammars in compiler construction here; aspects of the formalism that are uninteresting to compiler designers may well prove crucial to other applications.

Remember that a formal descriptive mechanism must be unambiguous, must describe exactly those problem instances whose solutions can be automated, and must be simpler than an ad-hoc description of the solution. Thus, to be useful for compiler designers, an attribute grammar formalism must be able to unambiguously describe the relationship among computations of contextual information, it must be possible to deduce execution order and intermediate storage requirements automatically, and the description must be simpler than a description using a programming language. We can apply these requirements to various existing attribute grammar formalisms to see whether any are really adequate and, if not, to highlight the areas needing improvement.

It turns out that one of the most crucial points governing the acceptance (or non-acceptance) of attribute grammars as a specification mechanism is their complexity relative to descriptions using a programming language. Therefore Section 4.1 briefly reviews descriptions that use programming languages. Section 4.2 summarizes the requirements placed on

an attribute grammar by the compiler construction problem, and ways in which the specification can be processed are discussed in Section 4.3.

4.1. Ad-Hoc Contextual Information Processing

Suppose that the structuring task of Table 1 creates an abstract syntax tree for the complete input text.10 Many existing compilers attempt to process all contextual information during parsing, thus avoiding the need to construct a tree.13 For most languages, however, having the complete tree simplifies the compiler and allows it to produce better code. Given the memory capacity of current computers, there seems to be little incentive to avoid tree construction.

In order to describe the contextual information processing in a programming language, the compiler designer writes a collection of mutually-recursive procedures. Each procedure invokes functions like the ones of the Environment abstract data type to compute specific values from other values. The computed values may be stored in global variables, in local variables or parameters of the recursive procedures, in the tree itself or in a central database. When a computation requires values dependent upon descendants of the current node, the same procedure or another in the collection is invoked.

For example, consider a procedure that processes an applied occurrence of an identifier. A code specifying the particular identifier represented by the corresponding tree leaf would have been stored at that leaf as the tree was being built by the structuring task. The procedure might therefore apply KeyInEnv to the Environment value held in a global variable and the code stored at the leaf. KeyInEnv will return a key, which is then stored at the leaf. Note that this operation can be expressed by a single assignment, provided that a pointer to the leaf is given as a parameter to the processing procedure.

What about the global variable holding the Environment value? Suppose that its name is CurrentEnv.
The processing procedure for a compound statement simply executes the following sequence on entry:

    SaveEnv = CurrentEnv;
    CurrentEnv = NewScope(CurrentEnv);

(Here SaveEnv is a local variable of the compound statement processing procedure.) It then invokes other processing procedures to deal with the components of the compound statement. These procedures invoke AddIdn and KeyInEnv, associating keys with defining and applied occurrences of identifiers respectively. Finally, just before terminating, the compound statement processor restores CurrentEnv from SaveEnv.

This strategy for describing contextual information processing is simple and intuitive. (For extensive examples of its use, see reference 14.) Global variables allow for easy dissemination of information over wide areas of the tree; local variables separate information only relevant within a local set of computations. Values placed in the tree are associated with specific contexts and can be accessed each time that context is processed. The central database, accessed via keys, makes information associated with specific entities available to all contexts in which those entities appear.
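The ad-hoc strategy just described can be sketched end to end. The following Python fragment is an illustration, not the paper's code: the tuple-shaped tree, the node kinds, and the inlined stand-ins for the Environment operations are all assumptions made for the sake of a runnable example.

```python
# Sketch of the ad-hoc recursive strategy of Section 4.1, with minimal
# stand-ins for the Environment operations of Section 3.

class Env:
    def __init__(self, parent=None):
        self.parent, self.bindings = parent, {}

def NewScope(env):
    return Env(env)

def AddIdn(env, idn, key):
    env.bindings.setdefault(idn, key)

def KeyInEnv(env, idn):
    while env is not None:
        if idn in env.bindings:
            return env.bindings[idn]
        env = env.parent
    return None

CurrentEnv = Env()   # global holding the environment of the current context
next_key = 0
uses = []            # stands in for storing keys at the tree leaves

def process(node):
    """One procedure serving several node kinds of a (kind, ...) tuple tree."""
    global CurrentEnv, next_key
    kind = node[0]
    if kind == 'compound':
        SaveEnv = CurrentEnv              # local variable of this activation
        CurrentEnv = NewScope(CurrentEnv)
        for child in node[1:]:
            process(child)
        CurrentEnv = SaveEnv              # restore just before terminating
    elif kind == 'decl':                  # defining occurrence
        next_key += 1
        AddIdn(CurrentEnv, node[1], next_key)
    elif kind == 'use':                   # applied occurrence
        uses.append((node[1], KeyInEnv(CurrentEnv, node[1])))

process(('compound',
         ('decl', 'x'),
         ('compound', ('decl', 'x'), ('use', 'x')),
         ('use', 'x')))
print(uses)   # [('x', 2), ('x', 1)]
```

The inner use of x binds to the inner declaration (key 2) and the outer use to the outer one (key 1), exactly because the save/restore discipline reverses the NewScope invocation on exit.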

4.2. Requirements Imposed by Compiler Construction

An attribute grammar is often defined as a 4-tuple AG = (G, A, R, B).9 G is a reduced context-free grammar, A is a finite set of attributes, R is a finite set of attribution rules, and B is a finite set of conditions. Attributes are associated with vocabulary symbols of the grammar, while attribution rules and conditions are associated with productions. An attribution


rule defines an attribute of some symbol in the production as a function of some set of attributes of symbols in the same production. A condition is a boolean function of some set of attributes of symbols in a production.

Experience has shown that this definition of an attribute grammar must be augmented if it is to be of practical use in compiler construction. The problem is that it describes only relationships between computations associated with adjacent nodes: children and their parents or children of the same node. "Long-distance" relationships are quite common among computations that process contextual information for a compiler, however. While such relationships can be expressed by passing attributes between neighbor nodes, such a solution greatly increases the complexity of the specification. This increase in complexity is intolerable because it makes the attribute grammar more difficult to write and understand than the ad-hoc description discussed in Section 4.1.

The designers of the GAG system,15 an early attribute grammar processor, recognized this problem and provided one notation for directly accessing an attribute in an ancestor of the current node and one for directly accessing attributes in a set of descendants. These notations have proven adequate for dependence based on the tree structure of a program. Some languages, such as C, demand computations that are related on the basis of the linear source text order. These relationships also cover long distances, and can be expressed by "threading" attributes through the grammar.16 Again the effect is a dramatic increase in complexity, and again the solution is a special notation to provide direct access to the desired attribute.

As an example of the use of long-distance relationships, consider the name analysis problem for ALGOL 60 and C compilers.
In ALGOL 60, the scope of a variable identifier declaration is the block in which it appears; in C, the scope begins at the defining occurrence and continues to the end of the compound statement. Thus each ALGOL 60 block constitutes a new scope, as does each C compound statement. An applied occurrence of an identifier should be sought in the current scope, using the KeyInEnv operation of the standard environment module.

Figure 2 shows attribution rules describing a relationship between the invocation of the NewScope operation that creates a scope for an ALGOL 60 block and the KeyInEnv operations that seek identifier bindings in it. In the first rule, INCLUDING Block.CompleteEnv refers to the CompleteEnv attribute of the Statement node's closest Block ancestor. Similarly, the environment passed to KeyInEnv in the last rule is the environment of the closest Block ancestor of the identifier.

AddIdn has a side effect on its environment argument. Such side effects induce relationships among computations that are not easily expressed with data dependence. They are dependence relations, but they are not visible in the usual way. There are two basic approaches to making these dependence relations visible:

• Define attributes that do not represent values, but simply carry the fact that a particular computation has been done.

• Relate computations to one another within a single attribution rule so that the availability of the attribute being defined reflects the fact that all computations in a certain set have been carried out.

Both approaches are needed; they are useful in slightly different contexts. Neither really distorts the attribute grammar formalism. A "dependence" attribute is treated just like any other attribute, except that no storage is required for it. Figure 2 shows how the normal identifier binding mechanism used in many functional languages suffices to relate computations.

RULE Inner: Statement ::= Block STATIC
  Block.InitialEnv := NewScope(INCLUDING Block.CompleteEnv);
END;

RULE Scope: Block ::= 'begin' Declarations Statements 'end' STATIC
  Block.CompleteEnv := LET l: (CONSTITUENTS IdnDef.key) IN Block.InitialEnv;
END;

RULE Applied: IdnUse ::= Identifier STATIC
  IdnUse.key := KeyInEnv(INCLUDING Block.CompleteEnv, Identifier.symbol);
END;

Figure 2: Long-Distance Relationships in ALGOL 60

Block.CompleteEnv in Figure 2 is an environment in which all local identifiers are guaranteed to have associated keys. The actual value of this attribute is identical to the value of Block.InitialEnv. By definition, LET i: e1 IN e2 is evaluated by first evaluating e1, then evaluating e2 after replacing every occurrence of i in e2 by the value of e1. CONSTITUENTS IdnDef.key is a list of all of the IdnDef.key attribute values in the subtree rooted in the rule. Thus the second rule describes a relationship between the local declarations and the complete environment. This relationship does not require that the local declarations be entered into the environment in any particular order, merely that they all be entered before the attribute Block.CompleteEnv is defined.

A specification analogous to Figure 2, but describing C scope rules, appears in Figure 3. Here CHAINSTART initiates a "threaded" attribution that passes through all descendants of the CompoundStmt rule. Any reference to CurrentEnv in a descendant accesses the threaded attribute. If the reference appears on the right-hand side of a rule, the value is obtained; if it appears on the left-hand side the value is set. Rules that do not actually use the threaded value do not need to mention it.
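The effect of such a threaded attribution can be pictured as a left-to-right walk that passes a single environment value through the subtree, so that a C-style scope begins at the defining occurrence. The Python sketch below is an illustration of that picture, not of any attribute grammar processor's output; the node shapes and function names are assumptions.

```python
# Sketch of left-to-right environment threading for C-style scope rules:
# a single environment value is passed through the subtree and extended
# at each defining occurrence, so later uses see it and earlier ones do not.

class Env:
    def __init__(self, parent=None):
        self.parent, self.bindings = parent, {}

def thread(node, env, out):
    """Walk node left to right; return the (possibly extended) environment."""
    kind = node[0]
    if kind == 'compound':
        inner = Env(env)                 # new scope at the chain start
        for child in node[1:]:
            inner = thread(child, inner, out)
        return env                       # the scope ends with the block
    if kind == 'decl':                   # ('decl', identifier, key)
        new = Env(env)                   # scope of the declaration starts HERE
        new.bindings[node[1]] = node[2]
        return new
    if kind == 'use':
        e = env
        while e is not None and node[1] not in e.bindings:
            e = e.parent
        out.append((node[1], e.bindings[node[1]] if e is not None else None))
        return env
    return env

outer = Env()
outer.bindings['x'] = 1                  # an enclosing binding of x
found = []
thread(('compound',
        ('use', 'x'),                    # before the local declaration
        ('decl', 'x', 2),
        ('use', 'x')),                   # after it
       outer, found)
print(found)   # [('x', 1), ('x', 2)]
```

The use preceding the local declaration still sees the outer binding; the use following it sees the local one. Under ALGOL 60 rules, by contrast, both uses would see the local binding, which is why Figure 2 needs no threading.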

RULE CompoundStmt: Statement ::= Declarations Statements STATIC
  CHAINSTART CurrentEnv;
  Declarations.CurrentEnv := NewScope(Statement);
END;

RULE Applied: IdnUse ::= Identifier STATIC
  IdnUse.key := KeyInEnv(CurrentEnv, Identifier.symbol);
END;

Figure 3: Long-Distance Relationships in C

4.3. Attribute Grammar Processing

Whatever the specific notation used for the attribute grammar, it must be possible to deduce two things: the order in which computations should be carried out, and what storage classes should be used for the results. These two deductions, in conjunction with the specification inherent in the functional notation used for computations, allow a tool to build the set of recursive routines described in Section 4.1. This set of routines is sometimes termed a "treewalk automaton".18

The attribute grammar processor must examine the attributions to determine dependences. It might generate new attributes to express long-distance relationships, and modify the text of the attribution rules to reflect them. Standard analysis techniques can then be used to determine the evaluation order.9 The evaluation order must be fixed by the grammar itself, with no dependence on the program being compiled,19 so that it is possible to discover the lifetimes of all attributes.20,21 On the basis of the lifetime analysis, the processor can decide which attributes must be stored in the tree, which can be local variables or parameters of the recursive routines, and which can be global variables.

In order to produce a set of recursive routines comparable to the ones that would appear in a hand-coded compiler, the processor must do dead variable analysis after the execution order has been determined. For example, the lists produced by the CONSTITUENTS operations in Figure 2 are never actually used. They represent long-distance relationships arising from side effects, and therefore influence the execution order. There is no need, however, to actually compute them. Thus the computations can be omitted from the generated routines, along with the storage required to hold intermediate values.

5. Conclusion

Attribute grammars have been associated with compiler construction from the very beginning. They cannot, however, be regarded as a "complete solution" to the compiler construction problem. Their role is as a formal mechanism for describing relationships among the computations used to manipulate contextual information. Once this role is understood, we can state requirements for useful attribute grammar notations and evaluate existing systems according to those requirements. We can answer the question "Why don't compiler writers use attribute grammars?".
Suppose that a compiler designer were to write an ALGOL 60 or C compiler by hand as outlined in Section 4.1. The compiler would contain assignment statements virtually identical to the attribute relations appearing in Figure 2 or Figure 3. In fact, those in the ALGOL 60 compiler would be simpler than the relationships in Figure 2 because only the data flow dependence need be explicit. Dependence due to side effects is reflected in the execution order of the code. Therefore attribute grammars do not reduce the size of the specification.

The advantage for attribute grammars is that they allow the designer to avoid specifying the execution order and deciding how to store intermediate information. Because these properties of the implementation are deduced from the specified relationships, it becomes easier to modify the design: Only the relationships need be changed; there is no need to reorder computations or alter variable declarations. These advantages accrue, however, only when attribute grammar processors generating appropriate code are available. Since attribute grammars are being used merely to define relationships among computations, the code they generate must interact smoothly with code produced by other tools or by hand.

Compiler construction systems that utilize attribute grammars in the role described in this paper are currently available.22 The performance of the compilers they produce is comparable to that of hand-written compilers,23 and the total specification size is considerably less than the size of the equivalent hand-written compiler. Nevertheless, the competition represented by the design techniques sketched in Section 4.1 is very strong. In order to make attribute grammars a viable alternative, further work is needed to generate parse-time attribution,20 and to provide a more object-oriented specification language. The primary issue in such a specification language will be to reduce complexity by associating computations with nonterminals and attributes with rules.24

References

1. E. T. Irons, 'A Syntax-Directed Compiler for ALGOL 60', Communications of the ACM, 4, 51-55 (January 1961).
2. E. T. Irons, 'Towards More Versatile Mechanical Translators', in Experimental Arithmetic, High Speed Computing and Mathematics, vol. 15, American Mathematical Society, Providence, RI, 1963.
3. D. E. Knuth, 'Semantics of Context-Free Languages', Mathematical Systems Theory, 2, 127-146 (June 1968).
4. D. E. Knuth, 'Semantics of Context-Free Languages: Correction', Mathematical Systems Theory, 5, 95-96 (March 1971).
5. M. Jazayeri, W. F. Ogden and W. C. Rounds, 'On the Complexity of the Circularity Test for Attribute Grammars', in Conference Record of the Second ACM Symposium on Principles of Programming Languages, Association for Computing Machinery, New York, 1975.
6. W. J. Cody and W. M. Waite, Software Manual for the Elementary Functions, Prentice Hall, Englewood Cliffs, NJ, 1980.
7. A. V. Aho, J. E. Hopcroft and J. D. Ullman, The Design and Analysis of Computer Algorithms, Addison Wesley, Reading, MA, 1974.
8. R. Jeffries, A. T. Turner, P. G. Polson and M. E. Atwood, 'The Processes Involved in Software Design', in Acquisition of Cognitive Skills, J. R. Anderson (ed.), Lawrence Erlbaum Associates, Hillsdale, NJ, 1981.
9. W. M. Waite and G. Goos, Compiler Construction, Springer Verlag, New York, NY, 1984.
10. D. A. Schmidt, Denotational Semantics, Allyn and Bacon, Newton, MA, 1986.
11. U. Kastens and W. M. Waite, 'An Abstract Data Type for Name Analysis', CU-CS-460-90, Department of Computer Science, University of Colorado, Boulder, CO, March 1990.
12. F. L. DeRemer and H. Kron, 'Programming-in-the-Large versus Programming-in-the-Small', in Proceedings of the International Conference on Reliable Software, 1975.
13. U. Ammann, 'Die Entwicklung eines PASCAL-Compilers nach der Methode des Strukturierten Programmierens' [The development of a PASCAL compiler by the method of structured programming], Ph.D. Thesis, Eidgenössische Technische Hochschule Zürich, Zürich, 1975.
14. R. K. Johnsson, W. A. Wulf, C. M. Geschke, S. O. Hobbs and C. B. Weinstock, The Design of an Optimizing Compiler, American Elsevier, New York, 1975.
15. E. Zimmermann, U. Kastens and B. Hutt, GAG: A Practical Compiler Generator, Springer Verlag, Heidelberg, 1982.
16. R. K. Jullig and F. DeRemer, 'Regular Right-Part Attribute Grammars', SIGPLAN Notices, 19, 171-178 (June 1984).
17. U. Kastens, 'LIDO -- A Specification Language for Attribute Grammars', Fachbereich Mathematik-Informatik, Universität-GH Paderborn, Paderborn, FRG, October 1989.
18. T. Kamimura, 'Tree Automata and Attribute Grammars', Information and Control, 57, 1-20 (1983).
19. U. Kastens, 'Ordered Attribute Grammars', Acta Informatica, 13, 229-256 (1980).
20. M. L. Hall, 'The Optimization of Automatically Generated Compilers', Ph.D. Thesis, Department of Computer Science, University of Colorado, Boulder, CO, 1987.
21. U. Kastens, 'Lifetime Analysis for Attributes', Acta Informatica, 24, 633-651 (1987).
22. R. W. Gray, V. P. Heuring, S. P. Krane, A. M. Sloane and W. M. Waite, 'Eli: A Complete, Flexible Compiler Construction System', SEG 89-1-1, Department of Electrical and Computer Engineering, University of Colorado, Boulder, CO, June 1989.
23. R. W. Gray, 'Declarative Specifications for Automatically Constructed Compilers', Ph.D. Thesis, Department of Computer Science, University of Colorado, Boulder, CO, December 1989.
24. M. Ishikawa, 'Local Attributes for OAG-Based Attribute Evaluators', CU-CS-472-90, Department of Computer Science, University of Colorado, Boulder, CO, May 1990.
Gray, 'Declarative Specifications for Automatically Constructed Compilers', PhD Thesis, Department of Computer Science, University of Colorado, Boulder, CO, December 1989. M. Ishikawa, 'Local Attributes for OAG-Based Attribute Evaluators', CU-CS-472-90, Department of Computer Science, University of Colorado, Boulder, CO, May 1990.