Dialog Processing

Michael Kipp, Jan Alexandersson, Ralf Engel and Norbert Reithinger
DFKI GmbH, Saarbrücken, Germany

Abstract. This chapter explains the major functionality of the dialog module in Verbmobil. Dialog knowledge is needed for context sensitive speech translation as well as for the automatic generation of dialog result summaries. Our component produces necessary structures for both purposes and stores them in a centrally accessible data repository — the dialog memory. The structures are based on robustly extracted shallow data which are corrected, extended and structured by our dialog processor. We use time and object completion algorithms to collect context data and compute inter-object relations to infer relevance for summarization. The resulting structures are used by the document generator for dialog minutes and summaries, and by the context evaluation module for translation disambiguation.

1 Introduction

It has become common knowledge that the translation of spontaneous speech needs a considerable amount of context knowledge if it is to be unambiguously understood. This context consists of world knowledge and knowledge of the preceding dialog in its semantic and pragmatic dimensions. Collecting and reasonably structuring this knowledge was the twin task of the dialog module and the context evaluation module in the first phase of Verbmobil (Alexandersson et al., 1997). In the second phase, another challenge was added: to use the already assembled and structured data for the automatic generation of summaries and minutes. For dialog systems these are useful, sometimes essential features. Imagine a travel information dialog or a stock exchange broker advice system where the user will want a document as a reminder of the most important facts and as a confirmation of all agreements. Similarly, users of a translation service like Verbmobil will want a written proof of what has actually been translated by the system (minutes) as well as a concise documentation of all agreements (summary). In Verbmobil 2, the original task of providing context data for translation remained on our agenda, but considerable changes in model and processing were made to accommodate the new summarization function. Therefore, a close cooperation with the context evaluation module (Chapter ??) was established. Both modules collect and process context knowledge about the dialog from different sources (context evaluation: deep parsing; dialog: shallow extraction). This data is then exchanged between the two modules and integrated into each other's internal structures. In all further processing, the context evaluation module provides knowledge for translation purposes (e.g. for semantic transfer), whereas our dialog module builds suitable structures for summary generation. In the remainder of this chapter we first outline the general architecture of the dialog module (Section 2) before focussing on our particular view of negotiation dialogs, designed to serve the needs of summarization (Section 3). The different algorithms mentioned in this section are explained in some depth in Section 4. We finish with a conclusion that sums up our approach.

Acknowledgments. We would like to thank our former colleague Elisabeth Maier and the following student workers for substantial contributions and good cooperation over the past eight years of the Verbmobil project: Christoph Birkenhauer, Amy Demeisi, Thomas Fuchs, Tilman Jaeger, Martin Klesen, Stephan Lesch, Norbert Pfleger, Christian Pietsch, Bouchra Rouiched, Özlem Senay, Paula Sevastre, and Arne Zenner.

2 Architecture

The dialog module consists of three components: dialog memory, plan processor and dialog processor. The dialog memory serves as a communication blackboard for the two processing components as well as for other Verbmobil modules. The two processing components incrementally infer new knowledge from the given data, storing it in the dialog memory for further usage.

2.1 Dialog Memory

The dialog memory is a central repository for all dialog data that needs to be preserved after the processing of one turn. It receives data from various other modules and provides data on request. Every module belongs to a distinct translation track: deep, shallow, statistical, or example-based. Each translation track has its own segmentation within one turn. The dialog memory must therefore store distinct lists of segment objects for each turn and track. A segment object can contain dialog act, topic, content expression, VIT, and dialog phase information. To uniquely identify a segment, we store turn number, begin/end time and translation track. Our component receives dialog acts and shallow content structures from the syndialog module (Chapter ??), deep content structures from the context evaluation module (Chapter ??), VITs from the robust semantics module (Chapter ??) and string representations of the translated utterance from all generation modules.

A GUI displays the most important information stored in the dialog memory (see Figure 1). The current topic is displayed as a highlighted button in the topmost pane. Underneath, the currently focussed content object is shown as a graph. The two lower panes display the speakers' utterances as recognized by speech recognition (left) and a structured view of the dialog acts (right).

Figure 1. GUI of the dialog component within the Verbmobil system.

Modules can request data like dialog acts or semantic content via the Verbmobil communication system (see Chapter ??), even across tracks. E.g., the semantics module (deep track) retrieves dialog acts stemming from the syndialog module (shallow track). Also, the context evaluation module (deep track) receives inferred content data from the dialog module (shallow track). If a cross-track request for a particular segment arrives that does not match an existing segment exactly (in begin and end time), we use the segment that has maximum overlap (at least 30%).

2.2 Plan Processor

We use the plan processor (Alexandersson and Reithinger, 1997) for the recognition of dialog phases, moves, and games. The dialog phase (e.g. OPENING, NEGOTIATION, CLOSING) is used by transfer for disambiguation: "Guten Tag" is translated to "Hello" or "Good bye" depending on the state of the dialog, whereas moves and games serve as a basis for the generation of dialog minutes (see Chapter ??). Our approach is based on plan recognition: Vilain (1990) shows that plan recognition can be viewed as parsing and thus that plan operators can be viewed as rules in a context-free grammar. In the course of the dialog, a tree (see Figure 2) consisting of four levels is constructed, where the leaves correspond to segments (represented by dialog acts) and the top node represents the whole dialog.
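As a toy illustration of the parsing view of plan recognition, dialog acts can be reduced bottom-up to moves and games with context-free rules. The rule set below is illustrative and much smaller than Verbmobil's actual operator inventory.

```python
# Plan operators viewed as context-free rules (left-hand side, right-hand side).
RULES = [
    ("INITIATIVE", ["SUGGEST"]),
    ("RESPONSE", ["REJECT"]),
    ("RESPONSE", ["ACCEPT"]),
    ("I-R", ["INITIATIVE", "RESPONSE"]),  # initiative-response game
]

def reduce_once(symbols):
    """Replace the first matching right-hand side with its left-hand side."""
    for lhs, rhs in RULES:
        n = len(rhs)
        for i in range(len(symbols) - n + 1):
            if symbols[i:i + n] == rhs:
                return symbols[:i] + [lhs] + symbols[i + n:], True
    return symbols, False

def recognize(dialog_acts):
    """Repeatedly reduce a sequence of dialog acts until no rule applies."""
    symbols, changed = list(dialog_acts), True
    while changed:
        symbols, changed = reduce_once(symbols)
    return symbols
```

Applied to the act sequence of a short exchange, `recognize(["SUGGEST", "REJECT"])` reduces to a single I-R game.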

[Figure 2 about here: a plan tree over an appointment negotiation. Its four levels, top to bottom: dialog (APPOINTMENT-NEGOTIATION), dialog phases (OPENING, NEGOTIATION), games (INTRO, INTRO-COMPLETE, I-R), moves (GREET, INTRODUCE, INITIATIVE, RESPONSE) and dialog acts (GREET, SUGGEST, FEEDBACK, REJECT); the bottom row indicates the speaker, A or B.]

Figure 2. The plan tree with its four levels. From the top, the negotiation dialog is split into dialog phases. The phases are split into games, which are divided into moves, which (finally!) are based on the utterances. The bottom indicates the speaker direction.

The bottom level glues utterances into moves, the second combines moves into games, the third combines games into phases, and the phases eventually contribute to the dialog as a whole. Basic moves for negotiation dialogs are initiative, response, and transfer-initiative. These can be combined into, e.g., initiative-response and transfer-initiative-initiative-response. Additional games for opening and closing as well as for clarification dialogs have been added. The plan operators for the two lowest levels have been derived automatically: for the lowest level, the terminal plan operators (which correspond to the leaves in the tree) are compiled out from the dialog act hierarchy, and the plan operators for the moves are derived using a grammar learning algorithm (see Stolcke, 1994a, and Stolcke, 1994b). The remaining operators are hand-coded.

For the segmentation of a turn into games, we use a combination of knowledge-based and statistical methods. We first determine the direction of a segment: a segment is either backward, forward or neutral looking. A FEEDBACK dialog act is always backward looking, whereas a SUGGEST act is almost always forward looking. As soon as the direction changes from backward to forward looking, the turn is segmented. In parallel, a statistical method based on language models (as described in Chapter ??) is used: for each move type a language model has been trained using dialog acts instead of words. A turn is segmented when both of these methods produce a positive validation of a segmentation suggestion. The language models also predict which game the segment corresponds to.

Besides constraints, the plan operators are augmented with actions allowing for side effects. In the case of plan operators for the construction of the dialog tree, these

actions contain selection criteria for those segments that contribute to the dialog minutes.

2.3 Dialog Processor

The dialog processor takes dialog act and content representations from the dialog memory and produces contextually enriched structures that form the building blocks for the dialog summary. Our view on summarization is tightly linked to the underlying task of negotiation: a result summary should mention only those objects that all speakers agreed on, and it should present these facts compactly. In the course of a dialog many suggestions are brought forward; some are accepted, others rejected, some simply forgotten and never mentioned again. In a word, the information is scattered across the dialog. Hence, the dialog processor bundles individual data items into suggestions while keeping track of explicit and implicit statements of acceptance and rejection.
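The direction-based part of the turn segmentation described in Section 2.2 might be sketched as follows. The direction assignments are illustrative (not the full dialog act hierarchy), and the parallel language-model validation is omitted.

```python
# Illustrative direction assignment for a few dialog acts.
DIRECTION = {"FEEDBACK": "backward", "ACCEPT": "backward", "REJECT": "backward",
             "SUGGEST": "forward", "INIT": "forward", "GREET": "neutral"}

def segment_turn(dialog_acts):
    """Cut a turn into segments whenever the looking direction flips
    from backward to forward."""
    segments, current, last_dir = [], [], None
    for act in dialog_acts:
        direction = DIRECTION.get(act, "neutral")
        if last_dir == "backward" and direction == "forward" and current:
            segments.append(current)
            current = []
        current.append(act)
        if direction != "neutral":
            last_dir = direction  # neutral acts do not change the direction
    if current:
        segments.append(current)
    return segments
```

In the full system a segmentation point is only accepted when the language models validate it as well.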

3 Modeling Negotiation Dialogs

The dialog representation described in Chapter ?? uses dialog acts (Alexandersson et al., 1998) and content descriptions to model single utterances. This is reasonable from a translation point of view. When trying to reason about the dialog's results, though, it is necessary to map this representation to a simpler one, which we call our negotiation model. In this simplified model we filter out all information that we consider irrelevant for the dialog's results. Since this filtering is based on statistically obtained dialog acts, our approach bears similarity to common summarization ideas (see Mani and Maybury, 1999, for an overview). We model a negotiation in terms of topics, negotiation objects (NeO's) and negotiation acts. Topics tell us what kind of items are being negotiated. These items are represented by NeO's, and negotiation acts tell us which objects are part of a suggestion and signal the speakers' attitudes (accept/reject). We also keep track of the relations between suggestions (more specific/general). This allows us to select the summary items for generation: the most specific accepted suggestions. A schematic overview of this processing is depicted in Figure 3.

[Figure 3 about here: extraction yields dialog acts (SUGGEST, ACCEPT, REJECT, INFORM, ...) and extracted objects; processing maps the dialog acts to negotiation acts (PROPOSE, FEEDBACK, ELABORATE) that build negotiation objects linked by relations such as more_specific_than; generation selects the result objects.]

Figure 3. Schematic depiction of the summarization process: dialog acts are mapped to negotiation acts which control object completion. Attitude annotations (from feedback acts) and inter-object relations (moreSpecificThan etc.) determine the final selection for the result summary.

3.1 Topics

Topics partition our domain into four areas: scheduling, traveling, accommodation and entertainment. To find the topic of an utterance we use heuristic rules based on keywords and the preceding topic. Within one topic the speakers are assumed to negotiate a limited set of objects (e.g. objects of the classes journey, move and book action for the traveling topic). We keep a set of templates for each topic into which incoming suggestions are integrated to obtain an object we call a negotiation object (NeO). In Figure 4 the original extracted object (ExO) of utterance B2 is integrated into a journey template. For each topic we keep topic-specific information in a topic frame. Thus, all suggestions (NeO's) made for one topic are stored in a topic-specific focus list.

3.2 Negotiation acts

Whereas the topic serves to insert the ExO into a template to create a NeO, the negotiation act determines how to handle the resulting NeO pragmatically. In every negotiation there are essentially four actions that a speaker can perform: (1) PROPOSE an object of negotiation, (2) give FEEDBACK on a former proposal, (3) ELABORATE a former proposal by adding matter-of-fact information, or (4) REQUEST task-related information. This information is contained in the dialog act. Thus, we use a direct mapping to retrieve the negotiation act, which in turn controls further processing of the NeO (see Table 1). Given the importance of the dialog act, we check its correctness using semantic and pragmatic constraints. Erroneous dialog acts are reinterpreted as detailed in Section 4.4.

3.3 Structuring

We give an overview of the processes triggered by a given negotiation act by means of an example (Figure 4) and show how the resulting objects are structured. Detailed processing data is presented later in Section 4.

Table 1. The mapping from dialog act to negotiation act and the respective processing

Dialog act                        | Negotiation act | Processing
SUGGEST, INIT, OFFER, COMMIT      | PROPOSE         | (1) complete object, (2) compute relation to focussed object, (3) focus object
ACCEPT, REJECT, EXPLAINED_REJECT  | FEEDBACK        | annotate focussed object with acceptance/rejection
INFORM                            | ELABORATE       | merge object with focussed object
REQUEST                           | REQUEST         | store object in temporary memory
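The mapping and per-act processing of Table 1 can be sketched as a simple dispatch. `TopicFrame` and its methods are stand-ins for illustration, not Verbmobil's actual API.

```python
# Hypothetical dispatch from dialog act to negotiation act (cf. Table 1).
DA_TO_NEGOTIATION_ACT = {
    "SUGGEST": "PROPOSE", "INIT": "PROPOSE", "OFFER": "PROPOSE", "COMMIT": "PROPOSE",
    "ACCEPT": "FEEDBACK", "REJECT": "FEEDBACK", "EXPLAINED_REJECT": "FEEDBACK",
    "INFORM": "ELABORATE",
    "REQUEST": "REQUEST",
}

class TopicFrame:
    """Minimal stand-in for a topic frame: a focus list plus attitude marks."""
    def __init__(self):
        self.focus_list = []   # most recent suggestion first
        self.attitudes = {}    # id(neo) -> list of accept/reject marks

    def complete(self, neo): return neo   # contextual completion elided here
    def relate(self, neo): pass           # inter-object relations elided here
    def focus(self, neo): self.focus_list.insert(0, neo)
    def annotate_focus(self, mark):
        if self.focus_list:
            self.attitudes.setdefault(id(self.focus_list[0]), []).append(mark)
    def merge_with_focus(self, neo): pass
    def store_temporary(self, neo): pass

def process(dialog_act, neo, topic_frame):
    """Apply the processing column of Table 1 for one segment."""
    act = DA_TO_NEGOTIATION_ACT.get(dialog_act)
    if act == "PROPOSE":
        neo = topic_frame.complete(neo)        # (1) complete the object
        topic_frame.relate(neo)                # (2) relations to other suggestions
        topic_frame.focus(neo)                 # (3) push onto the focus list
    elif act == "FEEDBACK":
        topic_frame.annotate_focus(dialog_act) # mark acceptance/rejection
    elif act == "ELABORATE":
        topic_frame.merge_with_focus(neo)
    elif act == "REQUEST":
        topic_frame.store_temporary(neo)
    return act
```

A SUGGEST thus pushes a completed NeO onto the focus list, while a subsequent ACCEPT annotates the head of that list.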

Figure 4 depicts the utterances, extracted objects (ExO's) and negotiation objects (NeO's) of a dialog excerpt. The proposal in B2, "there's one at six forty-five", obviously relates to the departure time of the train suggested in A1, "let's take the train to Frankfurt". Our completion process ensures that the NeO of B2 is expanded to represent the whole implicit suggestion (see Sections 4.1 and 4.2 below). At this point we also compute the moreSpecificThan relation of the new suggestion to all other suggestions made up to this point (see Section 4.3) and add the NeO to the topic focus list. Feedback utterances like "alright", "good", "no", "that doesn't work" etc. make the dialog processor add a corresponding acceptance/rejection mark to the head NeO in the focus list. A schematic view of a possible dialog is shown in Figure 3: it illustrates how attitude annotations and inter-object relations are used to select summary items. We take all objects marked with at least one accept and no reject attitude, and ignore all objects that are related to a more specific item. The summary items are passed to document generation (see Chapter ??) for final output to the user.
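The selection rule just stated (at least one accept mark, no reject mark, and no more specific surviving suggestion) might look like this in outline; the data representations are assumptions for the sketch.

```python
def select_summary_items(neos, attitudes, more_specific_than):
    """neos: candidate negotiation objects; attitudes: neo -> list of
    'accept'/'reject' marks; more_specific_than: predicate on two neos."""
    # keep only suggestions with at least one accept and no reject
    agreed = [n for n in neos
              if "accept" in attitudes.get(n, [])
              and "reject" not in attitudes.get(n, [])]
    # drop any suggestion refined by a more specific surviving one
    return [n for n in agreed
            if not any(more_specific_than(m, n) for m in agreed if m is not n)]
```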

4 Processing

This section explains the most important processing issues mentioned above: (1) the contextual completion of time and content objects, (2) the computation of inter-object relations and (3) the semantic reinterpretation of dialog acts.

4.1 Completing time expressions

Time expressions are received from the shallow extraction in the semantic interlingua TEL (see Chapter ?? and Endriss, 1998). Obtaining complete dates and times is important for the result summary as well as for speech translation. In this section we offer a formal definition of completeness for a given time expression and present the basic resolution algorithm for incomplete expressions.

[Figure 4 about here: utterance A1, "let's take the train to Frankfurt" (SUGGEST), yields an ExO with destination city 'frankfurt' and transportation 'rail', which is filled into a journey template; utterance B2, "there's one at six forty-five" (SUGGEST), yields an ExO with time "6:45 h", which completion with the sponsoring NeO expands to a journey with destination 'frankfurt', transportation 'rail' and departure time "20 May 2000, 6:45 h".]

Figure 4. Dialog excerpt showing the recognized utterances (left), extracted objects (ExO's, middle) and negotiation objects (NeO's, right) derived by template filling and completion with a sponsoring expression.

Problem. We practically never talk in absolute dates as in "how about Monday, the eighth of May 2000?". Instead we use shortcuts like "next Monday", leaving the utterance, if seen without context, incomplete. We distinguish between:

– Anaphora: references to temporal entities already introduced in the discourse, e.g. "then", "on that day", "on the same day"
– Ellipsis: skipping information that is clear from the context, e.g. "on the 15th", which could refer to the current month at the time of speaking or to a month mentioned earlier in the discourse
– Deixis: references relating to the current time of speaking like "now", "today", "tomorrow", "this week"

What does it mean for a time expression to be complete, i.e. to be absolutely determinable on a calendar? We present a formal definition based on the structure shown in Figure 5.

Model. We define a set of classes called principal units,

  year, season, month, week, day, partOfDay, hour, minute,

and order them by temporal duration: year ≻ season ≻ ... ≻ minute. Principal units themselves cannot be instantiated because there is sometimes more than one way of specifying a unit, e.g. "Monday" and "the fifteenth" both specify a day. So we introduce a set of subclasses and call them units:

  yearNumber, seasonName, ..., minuteNumber

[Figure 5 about here: the temporal specification graph, with the principal units on the middle axis (year, season, month, week, day, partOfDay, hour, minute) and their possible specifications in boxes to the left and right (yearNumber, seasonName, monthName, weekOfYear, weekOfMonth, holiday, dayOfMonth, weekdayName, partOfDayName, hourNumber, minuteNumber).]

Figure 5. The temporal specification graph shows all principal temporal units (middle axis, the two dashed boxes indicate optional units) and their possible specifications (boxes to the left and right).

Instances (e.g. "Monday") of a unit u (e.g. weekdayName) specify a point in time on the level of the associated principal unit ("day"). These instances, which we call temporal objects, depend on a time specification on a higher principal unit to be uniquely specified (e.g. an instance of weekdayName needs an instance of weekOfYear or weekOfMonth). We call this relation the specifies-relation (e.g. weekdayName specifies week; see the arrows from units to principal units in Figure 5). A temporal expression E is a set of temporal objects. E is well-defined iff for every unit u there is at most one instance of u in E.

We are now ready to define what it means for a temporal expression to be complete. Visually, one can imagine each temporal object of the expression covering one unit box in the graph in Figure 5. Completeness means that we can trace a path from the bottommost "covered" unit box to the topmost principal unit year, crossing only covered unit boxes. Formally, let E be a temporal expression.

A temporal object t ∈ E is complete in E iff
  principalUnit(unit(t)) = year, or
  the principal unit p with specifies(unit(t), p) is complete in E.

A principal unit p is complete in E iff
  there is a temporal object t ∈ E with principalUnit(unit(t)) = p that is complete in E, or
  p is optional, p ≠ year, and succ(p), the next larger principal unit, is complete in E.

And, finally, a temporal expression E is complete iff the temporal object t ∈ E with the smallest principal unit principalUnit(unit(t)) is complete in E.
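Under some assumptions about the specification graph (which two units are the optional ones, and where each specifies-arrow points; the tables below encode one plausible reading of Figure 5), the completeness test can be sketched as:

```python
# Principal units ordered from largest to smallest duration. 'season' and
# 'partOfDay' are assumed to be the two optional (dashed) units of Figure 5.
PRINCIPAL_UNITS = ["year", "season", "month", "week", "day",
                   "partOfDay", "hour", "minute"]
OPTIONAL = {"season", "partOfDay"}

# Each unit's principal unit, and the principal unit it depends on via the
# specifies-relation (arrow targets partly assumed).
PRINCIPAL_OF = {
    "yearNumber": "year", "seasonName": "season", "monthName": "month",
    "weekOfYear": "week", "weekOfMonth": "week", "holiday": "day",
    "dayOfMonth": "day", "weekdayName": "day", "partOfDayName": "partOfDay",
    "hourNumber": "hour", "minuteNumber": "minute",
}
SPECIFIES = {
    "yearNumber": None, "seasonName": "year", "monthName": "year",
    "weekOfYear": "year", "weekOfMonth": "month", "holiday": "year",
    "dayOfMonth": "month", "weekdayName": "week", "partOfDayName": "day",
    "hourNumber": "partOfDay", "minuteNumber": "hour",
}

def object_complete(unit, expr):
    """A temporal object is complete iff it needs no further specification
    or the principal unit it specifies is itself complete in expr."""
    target = SPECIFIES[unit]
    return target is None or principal_complete(target, expr)

def principal_complete(p, expr):
    """A principal unit is complete iff some object in expr instantiates it
    and is complete, or it is optional and the next larger unit is complete."""
    if any(PRINCIPAL_OF[u] == p and object_complete(u, expr) for u in expr):
        return True
    if p in OPTIONAL:
        return principal_complete(PRINCIPAL_UNITS[PRINCIPAL_UNITS.index(p) - 1], expr)
    return False

def is_complete(expr):
    """expr maps units to values; complete iff its smallest object is complete."""
    lowest = max(expr, key=lambda u: PRINCIPAL_UNITS.index(PRINCIPAL_OF[u]))
    return object_complete(lowest, expr)
```

A fully specified date and time passes the test, while an expression lacking, say, a month does not.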

This model for temporal expressions is the basis for our completion algorithm, which tries to take over objects from a sponsoring expression and checks the newfound expression for completeness.

Sponsor and Focus. Before we can resolve a time expression we need to find its sponsor, also called the antecedent in the literature. (We prefer the term sponsor, which has also been used in the dialog processing community, e.g. by LuperFoy (1991), because antecedent usually refers to linguistic context, whereas we also have to consider situational context, i.e. the time of the conversation, as a possible sponsor.) Note that if we always took the preceding expression as the sponsor for the current expression, we would cover 95% of all cases according to a study by Wiebe et al. (1997). The other cases (of the Verbmobil corpus) consist mainly of generalizations and subdialogs. Consider the following excerpt:

A01: How about the fifth?
B02: Great. The whole week's free.
A03: So let's meet at two.

In B02 the speaker introduces a generalization of the preceding time expression. This is a problem for the expression in A03, which cannot be resolved by the week information of B02 alone. We solve this problem by introducing a focus list to which each new time expression e is added as a possible sponsor. This gives us the possibility of excluding cases like the one in B02. The exclusion condition is, where e_f is the head element of the focus list:

  if dialog act = ACCEPT and e lessSpecificThan e_f (equivalent to the temporal relation "in" as defined by Allen, 1983), then do not add e to the focus list.

The same approach works for subdialogs that contain time expressions: after closing a subdialog, earlier time expressions need to be accessed again, and these can be found in the focus list.

Resolution. For a given time expression e, the main loop of the resolution algorithm steps through the focus list in search of a suitable sponsor. For each expression e_s in the focus list we try our function complete(e, e_s), where e_s acts as the (potential) sponsor and e as the receiving expression. As an example, we try to complete

  e = {hourNumber:2, partOfDayName:pm, dayOfMonth:15}

with

  e_s = {dayOfMonth:12, monthName:May, yearNumber:2000}

The resolution algorithm takes all objects contained in e and tries to complete each of them, starting with those objects with the biggest principal units (topmost in Figure 5). In case of success the respective principal unit is marked as complete. In the example, we start by trying to complete the object dayOfMonth:15, whose principal unit is day. To complete the single object, our algorithm follows the specifies-relation and tries to obtain an object of the required kind from e_s. For dayOfMonth we need an object of principal unit month, as the arrow in Figure 5 indicates. That unit is instantiated in e_s by the object monthName:May. Before actually adding the found object, we try to complete e plus that object and see what happens. In case of success we continue the completion of the other objects of e. Completing e plus monthName:May with the same algorithm leads to

  e = {hourNumber:2, partOfDayName:pm, dayOfMonth:15, monthName:May, yearNumber:2000}

When trying to complete the objects partOfDayName:pm and hourNumber:2, their respective specifies-arrows point to units that have already been completed, so the algorithm takes no further measures there and terminates with the lowest unit (hourNumber) found complete. If all objects of e were completed successfully, e is complete. Otherwise, the main loop continues, trying the next expression on the focus list as a sponsor.
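A simplified sketch of complete and the sponsor-searching main loop. Optional units and the incremental per-object completeness checking are omitted, and the specifies-arrows (`SPECIFIES`) are partly assumed.

```python
# For each unit: the principal unit it depends on (None = top of the graph),
# and the principal unit it instantiates.
SPECIFIES = {"yearNumber": None, "monthName": "year", "dayOfMonth": "month",
             "weekdayName": "week", "weekOfMonth": "month",
             "partOfDayName": "day", "hourNumber": "day", "minuteNumber": "hour"}
PRINCIPAL_OF = {"yearNumber": "year", "monthName": "month", "dayOfMonth": "day",
                "weekdayName": "day", "weekOfMonth": "week",
                "partOfDayName": "partOfDay", "hourNumber": "hour",
                "minuteNumber": "minute"}

def find_unit(expr, principal):
    for u in expr:
        if PRINCIPAL_OF[u] == principal:
            return u
    return None

def complete(expr, sponsor):
    """Try to complete expr with objects taken over from the sponsor; return
    the completed expression, or None if a needed unit cannot be supplied."""
    expr = dict(expr)
    todo = list(expr)
    while todo:
        needed = SPECIFIES[todo.pop()]
        if needed is None or find_unit(expr, needed) is not None:
            continue                  # top of the graph, or already specified
        donor = find_unit(sponsor, needed)
        if donor is None:
            return None               # sponsor cannot sponsor this object
        expr[donor] = sponsor[donor]  # take over the sponsor's object ...
        todo.append(donor)            # ... and complete it in turn
    return expr

def resolve(expr, focus_list, recency=3):
    """Main loop: walk the focus list (most recent first) for a sponsor."""
    for sponsor in focus_list[:recency]:
        completed = complete(expr, sponsor)
        if completed is not None:
            return completed
    return None
```

On the paper's example, the receiving expression takes over monthName:May and yearNumber:2000 from the sponsor, but not the sponsor's dayOfMonth, since day is already instantiated.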

Bits and Pieces. The algorithm is easily extended to cover more complex temporal expressions, i.e. modified, quantified and coordinated expressions as well as intervals; for a complete treatment see Birkenhauer (1998). We use a separate calendar component to compute relative time expressions ("two weeks after the fifth"), holidays ("Easter weekend") and the temporal relations defined by Allen (1983). The moreSpecificThan relation defined below (Section 4.3) is extended to temporal expressions by equivalence to Allen's relation "in".

4.2 Completing content expressions

Completing negotiation objects (NeO's) is similar to the completion of time expressions: for a new NeO n we (1) find a suitable sponsor and (2) take over parts of the sponsor (see Figure 4). Both steps are again modeled by a single function complete(n, n_s), which tries to complete n using n_s as a sponsor, returning a boolean value for success or failure. By applying this function to every NeO on the focus list until it succeeds, we find a sponsor and complete n. (We found it useful to introduce an upper bound on the number of objects tested, e.g. 3, as a recency threshold.) The function complete works recursively through the object n (and the respective sub-objects of n_s). It first checks certain preconditions: named entities (cities, persons etc.) can only be sponsored by objects with an equal name, move objects must have certain temporal properties (the move back must come after the move there), and so on. If the preconditions hold, all subtrees of n_s that do not occur in n are added to n (see Figure 4). Under certain conditions relations can be specialized (e.g. has_time to has_departure_time). Note that since n_s is already a completed object, we obtain a complete object without further processing of other preceding objects.

4.3 Inter-object relations

To determine the most specific accepted negotiation object for each topic we need to compute the relations moreSpecificThan, moreGeneralThan and equalTo for each pair of NeO's. They are defined recursively. The relation moreSpecificOrEqual(n1, n2) for two NeO's n1, n2 holds iff

– the root of n1 is of the same class as, or a subclass of, the root of n2, and
– for every relation r(n2, m') there is a relation r(n1, m'') such that moreSpecificOrEqual(m'', m') holds, where m' and m'' are NeO's, temporal objects or primitive data types. (We extend moreSpecificOrEqual to strings, numbers and booleans by using the respective equality relations; for temporal objects the relation maps to Allen's "in" and "same" relations.)

The relation equalTo is defined analogously, and the other two follow quite easily:

  moreSpecificThan(n1, n2)  iff  moreSpecificOrEqual(n1, n2) and not equalTo(n1, n2)
  moreGeneralThan(n1, n2)   iff  moreSpecificThan(n2, n1)
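A rough sketch of the recursive relation, modeling NeO's as (class, relations) pairs; the class hierarchy test `subclass_of` is a stand-in, and relation specialization is omitted.

```python
def more_specific_or_equal(n1, n2, subclass_of=lambda c1, c2: c1 == c2):
    if isinstance(n2, tuple):                      # n2 is a NeO
        if not isinstance(n1, tuple):
            return False
        (c1, rels1), (c2, rels2) = n1, n2
        if not subclass_of(c1, c2):
            return False
        # every relation of the more general NeO must be matched in n1
        return all(r in rels1 and
                   more_specific_or_equal(rels1[r], m, subclass_of)
                   for r, m in rels2.items())
    # primitives: plain equality (temporal objects would use Allen's relations)
    return n1 == n2

def equal_to(n1, n2):
    return more_specific_or_equal(n1, n2) and more_specific_or_equal(n2, n1)

def more_specific_than(n1, n2):
    return more_specific_or_equal(n1, n2) and not equal_to(n1, n2)
```

A journey with a departure time is thus more specific than the same journey without one.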



4.4 Dialog act reinterpretation

The dialog act plays a key role since it determines which internal dialog actions to apply to the semantic data. This dialog act, however, stems from a statistical analysis and may be falsely classified. Some of these errors can be detected through semantic or pragmatic inconsistencies. Consider:

A: yes. will you please make reservations at a hotel?
   [COMMIT, accommodation,
    has_book_action:[book_action, has_agent:[addressee],
                     has_book_theme:[hotel]]]

The dialog act COMMIT by definition signifies that the speaker is willing to perform some action in the future (e.g. booking a hotel). In this utterance the only action is one performed by the addressee, marked by has_agent:[addressee], so we can safely assume that this is a false classification and that the actual dialog act is a REQUEST_COMMIT. Accordingly, we defined a set of rules that take the topic, dialog act history and content expression as conditions and trigger the so-called reinterpretation of the current dialog act. The new dialog act can be explicitly stated in the rule (as in the above example) or taken from the list of most probable dialog acts stored in the dialog memory. Since reinterpretations of dialog acts can occur at any time during dialog processing, we have to loop back and redo all prior dialog processing actions (since those actions depended on the dialog act).
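A single reinterpretation rule of this kind might be sketched as follows; the rule format and feature names are illustrative.

```python
# A COMMIT whose only action is performed by the addressee is inconsistent
# with the definition of COMMIT and is reclassified as REQUEST_COMMIT.
def reinterpret(dialog_act, content):
    """content: flat dict of semantic features, simplified for the sketch."""
    if dialog_act == "COMMIT" and content.get("has_agent") == "addressee":
        return "REQUEST_COMMIT"
    return dialog_act
```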

5 Conclusion

We presented the dialog module in its main function as collector and provider of context information. Since our focus in the second phase of Verbmobil was summarization, we explained in detail how complete negotiation units are obtained and selected for the generation of a multilingual dialog summary. In conjunction with Chapter ?? about the context evaluation module, it is easy to see that our enriched structures can also help in the disambiguation of transfer requests. One good reason for splitting summarization and translation context is the fundamentally different focus mechanism, which highlights summarization objects in our case and linguistically relevant entities in the case of the context evaluation module.

We have not touched on the question of evaluation, which is an important factor in dialog and summarization systems. In our case, it would be reasonable to compare human-made summaries with our automatically generated ones. This task poses some problems. First, how do we instruct the human coder to summarize (without telling him/her to behave exactly like our algorithms)? Second, errors in speech recognition sometimes lead to output that is hard to interpret even for human readers, so it is not always clear what the speakers agreed on. Nevertheless, in a first tentative study on four dialogs, assuming perfect recognition, we achieved a recall of 67% and a precision of 81%, which indicates good performance and serves as a starting point for further evaluation studies.

References

Alexandersson, J., and Reithinger, N. (1997). Learning dialogue structures from a corpus. In Proceedings of EuroSpeech-97, 2231–2235.

Alexandersson, J., Reithinger, N., and Maier, E. (1997). Insights into the Dialogue Processing of Verbmobil. In Proceedings of the Fifth Conference on Applied Natural Language Processing, ANLP '97, 33–40.

Alexandersson, J., Buschbeck-Wolf, B., Fujinami, T., Kipp, M., Koch, S., Maier, E., Reithinger, N., Schmitz, B., and Siegel, M. (1998). Dialogue Acts in VERBMOBIL-2 – Second Edition. Verbmobil-Report 226, DFKI Saarbrücken, Universität Stuttgart, Technische Universität Berlin, Universität des Saarlandes.

Allen, J. F. (1983). Maintaining Knowledge about Temporal Intervals. Communications of the ACM 26(11):832–843.

Birkenhauer, C. (1998). Das Dialoggedächtnis des Übersetzungssystems Verbmobil. Diplomarbeit, Universität des Saarlandes.

Endriss, U. (1998). Semantik zeitlicher Ausdrücke in Terminvereinbarungsdialogen. Verbmobil-Report 227, Technische Universität Berlin.

LuperFoy, S. (1991). Discourse Pegs: A Computational Analysis of Context-Dependent Referring Expressions. Ph.D. Dissertation, University of Texas at Austin.

Mani, I., and Maybury, M., eds. (1999). Advances in Automatic Text Summarization. MIT Press.

Stolcke, A. (1994a). Bayesian Learning of Probabilistic Language Models. Ph.D. Dissertation, University of California at Berkeley.

Stolcke, A. (1994b). How to Boogie: A Manual for Bayesian Object-oriented Grammar Induction and Estimation. International Computer Science Institute, Berkeley, California.

Vilain, M. B. (1990). Getting Serious about Parsing Plans: a Grammatical Analysis of Plan Recognition. In Proceedings of American Association for Artificial Intelligence, 190–197.

Wiebe, J., O'Hara, T., McKeever, K., and Oehrstroem-Sandgren, T. (1997). An empirical approach to temporal reference resolution. In Cardie, C., and Weischedel, R., eds., Proceedings of the Second Conference on Empirical Methods in Natural Language Processing, 174–186. Providence, Rhode Island: Association for Computational Linguistics.
