
Formalising Hinting in Tutorial Dialogues

Dimitra Tsovaltzi
Computational Linguistics, University of Saarland, Germany

Colin Matheson
Language Technology Group, University of Edinburgh, Scotland

Abstract

The formalisation of the hinting process in tutorial dialogues is undertaken in order to simulate the Socratic teaching method. An adaptation of the BE&E annotation scheme for dialogue moves, based on the theory of social obligations, is sketched, and a taxonomy of hints and a selection algorithm are suggested, based on data from the BE&E corpus. Both the algorithm and the tutor's reasoning are formalised in the context of the information state theory of dialogue management developed on the trindi project. The algorithm is characterised using update rules which take the student model into account, and the tutor's reasoning process is described in terms of context accommodation.

1 Theoretical Background

1.1 Information State Update

The formalisation of the hinting process uses the Information State (IS) approach, in which dialogue is seen in terms of the information that each participant has at every point in the interaction. The formal account thus accords with the proposals put forward on the trindi project¹ (Bohlin et al. (1999), Larsson et al. (1999)). The IS representation consists of fields containing different kinds of information about the dialogue. This information is updated after each new utterance using update rules of various types. These rules specify how each field, or information attribute, in the IS should be updated, if at all.²

¹ Telematics Applications Programme, Language Engineering Project LE4-8314.
² Figure 2 contains an IS with the relevant fields, and a commented instance of an update rule can be found in example (4) below.


The background theories that inform the IS update formalisation assumed here are the obligation-driven approaches discussed in Matheson et al. (2000) and Traum and Allen (1994), and Context Accommodation as described in Cooper et al. (2000), Kreutel and Matheson (2000), and Larsson et al. (2000).
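As a concrete picture of the machinery, a minimal sketch in Python of an information state and an update-rule loop follows. This is not the TrindiKit API: the field names follow the avm in Figure 2 below, but all types and function names are our own illustration.

```python
from typing import Callable

# An IS as a record of fields (named after the avm in Figure 2).
def make_is() -> dict:
    return {"AGENDA": [],    # acts to be produced next
            "OBL":    [],    # pending discourse obligations
            "INT":    [],    # the tutor's current intentions
            "LM":     None,  # latest move
            "DH":     []}    # dialogue history of the session

# An update rule pairs a condition on the IS with an effect on it.
Rule = tuple[str, Callable[[dict], bool], Callable[[dict], None]]

def update(state: dict, rules: list[Rule]) -> None:
    """After each new utterance, apply the effects of every rule
    whose conditions hold in the current information state."""
    for _name, cond, effect in rules:
        if cond(state):
            effect(state)
```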

1.2 Discourse Obligations

The theory of obligations introduces the notion of discourse and social obligations as a way of analysing some of the social aspects of interactions, and provides an explanation for behaviour that other theories do not predict. It augments the representation of the intentions of dialogue participants in an attempt to capture the natural flow of conversation. There are different strands of obligation-based research, as evidenced by the references above. Treating tutorial dialogues in terms of obligations is an intuitive way of analysing and predicting some specific kinds of dialogue behaviour that do not seem to follow the rules of everyday discourse. For example, applying the SharedPlans theory (see Rich and Sidner (1998)) to tutorial dialogues would soon prove problematic, since the tutor does not follow the principle of co-operativity which is central to that theory. A prerequisite for achieving goals, according to SharedPlans, is that the dialogue participants should be as clear as they can about their beliefs and plans, whereas this is precisely what the tutor avoids doing in the hinting process. SharedPlans does not explain, for instance, why the tutor in utterance T[2] in Figure 1 below does not simply tell the student what a sinewave is, which she could easily do.

According to the obligation-driven framework, on the other hand, intentions are necessary but not the only driving force behind an utterance. For example, only if one considers it the student's obligation to address the tutor's questions and follow her directives can one interpret the total lack of overt signals from the student that he intends to co-operate, such signals normally being central to collaboration. In the context of the overall obligations the tutor knows what the student's intentions are, and can interpret his actions correctly, because the tutorial dialogue genre permits no other behaviour: it is the student's obligation to follow the tutor's directives.

2 Obligations and Dialogue Moves

2.1 Dialogue Moves in the BE&E Corpus

An informal analysis of the dialogue moves observed in the BE&E corpus has been sketched, taking into account the obligation-theoretic considerations outlined above. Moves that are of no particular interest for the representation of obligations have generally been adopted unchanged from the BE&E annotation scheme. The BE&E (Basic Electricity and Electronics) corpus consists of recorded human-to-human dialogues carried out via a computer keyboard. The student performs actions in a lab, represented by a GUI, towards a specific target such as measuring current. The tutor observes the actions and intervenes when necessary, or when the student asks her to.³ The move hint is described here as an example, for obvious reasons.

³ Also see http://www.hcrc.ed.ac.uk/~jmoore/tutoring/BEE_corpus.html

2.2 The Hint Move

It is clear from the corpus that the student is obliged to address the tutor's utterances, whereas the tutor hardly ever answers questions directly, contrary to the norm outside the genre. Instead, she gives hints which withhold the answer. A hint is a tutorial strategy that aims to encourage active learning. It can take the form of eliciting information that the student is unable to access without the aid of prompts, or information which he can access but whose relevance to the problem at hand he is unaware of. Alternatively, a hint can point to an inference that the student is expected to make based on knowledge available to him, which supports the general reasoning needed to deal with a problem (Hume et al., 1996). Partial answers from the tutor discharge the obligation to address the student's questions. There are examples in the BE&E corpus where the tutor explicitly states her method or the student shows he is aware of it, such as "Very good. You answered your own question" and "I'll give you another hint" from the tutor, and "I need another hint" from the student.

The initiation of hints can be due to various reasons:

(i) the tutor observes that the student is not making any progress in the task, or that he is taking steps in the wrong direction;

(ii) the student asks a question and the tutor does not want to answer it directly;

(iii) the student gives the wrong answer or asks the wrong question in response to a tutor's question.

Figure 1 illustrates some of the points briefly discussed above. The tutor gives a few hints to try to make the student follow her reasoning (in T[2], T[4], T[6], and T[7]). She realises, however, that the student does not remember the lesson very well; he is so bad at interpreting her hints that she is forced to give explanations about basic concepts (T[9]). She therefore asks him to read the lesson again (T[11]), not wanting to simply give the answers away.
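As a compact summary of initiation conditions (i)–(iii) above, a hinting session could be triggered roughly as in the following sketch; the event encoding is hypothetical and not part of the BE&E annotation.

```python
def should_start_hinting(event: dict) -> bool:
    """Return True when one of the initiation conditions (i)-(iii) holds."""
    kind = event.get("type")
    if kind == "lab_action" and not event.get("correct", True):
        return True   # (i) no progress, or a step in the wrong direction
    if kind == "student_question":
        return True   # (ii) the tutor avoids answering directly
    if kind == "student_answer" and not event.get("correct", True):
        return True   # (iii) a wrong answer (or wrong question) in response
    return False
```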

3 The Hinting Strategies

Hint is one of the moves that appear in tutorial dialogues. Although the surface structure of hints is heterogeneous, there appears to be an underlying structure common to different categories of hints, and we undertook the formalisation based on these perceived regularities. The taxonomy of strategies formalising the hinting process in our model comprises the following hints: Pragmatic Hint, Relevant Concept, Following Step, Speak to Answer, Encourage, Rephrase Question, Logical Relation, Move Backwards, More General Concept, Spell Out the Task, Explain Fundamental Principles of Domain, Narrow Down Choices, Explain, and Point to Lesson. The names are intended to be as descriptive of the content as possible, and should in some cases be self-explanatory. Some of the strategies are not real hints (for example Point to Lesson), but they have been included in the taxonomy because they are part of the general hinting process. Although the strategies were derived by looking at data from a specific domain, namely Basic Electricity and Electronics (BE&E), the aim was to abstract away from the particular characteristics of that domain and produce a domain-independent taxonomy, to the extent that this is possible. A hinting strategy can be realised either by giving a small clue which the tutor deems sufficient to initiate the desired reasoning process, or by eliciting the clue itself, that is, by prompting the student to produce it. These bottom-level moves are called informs and t-elicits respectively.

S[1]: I have no idea what a sinewave is. Was this covered in the tutorial?
T[2]: Yes, remember the wave that represented alternating current in the lesson?
S[3]: I think i remember it being represented as a ∼ on the ammeter control panel
T[4]: OK, that's true about the multimeter's function dial. But do you remember
S[5]: Nope
T[6]: a graph of a wave in the lesson that represented alternating current?
T[7]: (20 sec later) Do you remember reading about frequency and amplitude and all that?
S[8]: I'm not sure. Is this a trick question to see if you can get me to invent a memory?
T[9]: No, this is not a psychology experiment. :) I'm just trying to see how much you remember. A sinewave starts out at 0 and increases to the maximum amplitude then decreases past 0 in the negative direction and then returns to 0 again. Does any of this ring a bell?
S[10]: It really doesn't. But I think I'm following your explanation.
T[11]: Go ahead and reread the lesson.

Figure 1: Example tutorial dialogue

3.1 An Example of a Hinting Strategy

Example (1) below contains an instance of the hint relevant concept. In employing this strategy the tutor points to concepts that are relevant to the current problem, either in order to trigger the required information in memory, or in order to correct the student's erroneous reasoning. This can be done by asking a question whose answer is the relevant concept (or only part of it), by asking about the relevant concept, or by simply mentioning it. The relation between the concept pointed to and the one at hand, whether they are opposite, similar, related to the problem in a similar way, and so on, is made explicit:

(1)

T[1]: ...What are the instructions asking you to do?
S[2]: Make a circuit between the source(battery) and the resistance (rheostat) and then attach the miliampmeter to measure the resistance in ohms.
T[3]: OK, you're close. But keep in mind that a miliampmeter is a special case of an ammeter. Do you remember what an ammeter measures?
S[4]: Amps?
T[5]: Right, very good...

In example (1) the tutor is trying to make the student see that the meter does not measure Ohms but Amperes. In utterance T[3] she brings up the notion of ammeter, which also measures Amperes. This strategy is analysed here as an example of a relevant concept style of hint (formalised in (2) below). She hopes that the student will remember this and will infer that the meter measures Amperes and, thus, current. The ultimate aim of course is that the student will realise that he must use the ohmmeter in order to measure resistance in Ohms, which is the problem at hand. The student follows the hint, as indicated by S[4].
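For reference, the taxonomy and the two bottom-level realisations can be transcribed directly into code. The names below are exactly those listed in section 3; only the encoding as a Python enumeration is ours.

```python
from enum import Enum, auto

class HintStrategy(Enum):
    PRAGMATIC_HINT = auto()
    RELEVANT_CONCEPT = auto()
    FOLLOWING_STEP = auto()
    SPEAK_TO_ANSWER = auto()
    ENCOURAGE = auto()
    REPHRASE_QUESTION = auto()
    LOGICAL_RELATION = auto()
    MOVE_BACKWARDS = auto()
    MORE_GENERAL_CONCEPT = auto()
    SPELL_OUT_THE_TASK = auto()
    EXPLAIN_FUNDAMENTAL_PRINCIPLES_OF_DOMAIN = auto()
    NARROW_DOWN_CHOICES = auto()
    EXPLAIN = auto()
    POINT_TO_LESSON = auto()

class Realisation(Enum):
    INFORM = auto()    # the tutor gives the clue herself
    T_ELICIT = auto()  # the tutor prompts the student to produce it
```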

4 Modelling the Hinting Process

In order to model the hinting process an algorithm was derived based on the examples from the BE&E corpus. It takes into account the current student answer and the number of wrong answers encountered so far in the dialogue as a means of deciding on the student model and on which hint to generate. This gives emphasis to the local student model, which has been found to be more important than the global one (Freedman et al., 1998). Six categories of student answer were judged necessary based on the data: correct, near miss, partially correct (only part of the correct answer was given), grain of truth (there is an indication of some understanding of the problem but the answer is wrong), wrong, and misconception (typically confused concepts and suchlike). The initiation of the hinting process, for example when the student performs a wrong action in the lab, and its conclusion, perhaps when he needs no more help, were also modelled by the algorithm. The algorithm itself was modelled using update rules in the trindi format. Section 4.1 includes an example of an update rule whose preconditions and effects were derived from the algorithm. The algorithm is a comprehensive guide, and we do not claim that it accurately predicts all the dialogues in the corpus. This inaccuracy is largely due to the inconsistency observed in the human tutor's behaviour, which is not pedagogically justifiable. We have tried to overcome these inconsistencies by normalising the behaviour based on the theory behind the Socratic method, thereby sacrificing flexibility. Nevertheless, we have no evidence as to whether the consistency of an effective method or flexibility is better.

The algorithm takes into account the fact that every time there is a new answer from the student the local student model, that is, the performance in one hinting session, changes. The tutor's hint reveals as much information as needed based on the understanding demonstrated by the current answer. When the hinting session has just started it is easier to terminate it, if the student seems to follow; however, the hinting strategies used at this stage are not very revealing. The tutor will start the hinting session when the student makes a mistake. If she is not certain of the reason behind the mistake, she will perform a check for the origin of the mistake. At this level, if the answer is correct, the hinting process will end. If not, the tutor will choose the appropriate hint according to the type of answer the student gives. All hints at this second level are more informative, since the student has already been given a less informative hint and could not follow it. After the second hint, the hinting process will continue even if the student gives a correct answer, as the tutor is reluctant to let the student carry on by himself: performance so far has not been good, so she wants to guide the student and make sure he understands the whole task. The hints become yet more helpful. After three hints in a row, the tutor will require only correct answers in order to continue hinting. In any other case the student model is too poor, since by now the hints are quite revealing and are still not being followed. In order both to avoid frustration on the student's part and to preserve the effectiveness of the Socratic method, the student is then given a brief explanation. After the explanation, the tutor will check for understanding again. If a correct answer is still not forthcoming, the student will be referred to the study material, which he obviously has not read properly. The aim is not to teach the material, but to help the student to assimilate it, once read, and to be able to apply it.
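The walkthrough above can be summarised as a decision procedure. In the following sketch the answer categories are the six from the text, but the control flow is our reading of the prose, and the action labels are invented for illustration.

```python
from enum import Enum, auto

class Answer(Enum):
    CORRECT = auto()
    NEAR_MISS = auto()
    PARTIALLY_CORRECT = auto()
    GRAIN_OF_TRUTH = auto()
    WRONG = auto()
    MISCONCEPTION = auto()

def next_tutor_action(answer: Answer, hints_given: int,
                      explained: bool = False) -> str:
    """Choose the tutor's next move within one hinting session."""
    if explained:
        # After the brief explanation the tutor checks understanding
        # again; a still-incorrect answer refers the student to the
        # study material.
        return "end_session" if answer is Answer.CORRECT else "point_to_lesson"
    if answer is Answer.CORRECT:
        # Early on, a correct answer ends the session; after the second
        # hint the tutor keeps guiding the student through the task.
        return "end_session" if hints_given < 2 else "hint_next_step"
    if hints_given >= 3:
        # Quite revealing hints are still not being followed, so give
        # a brief explanation and re-assess.
        return "explain_then_check"
    # Otherwise choose a hint matched to the answer category, one level
    # more informative than the previous one.
    return f"hint_for({answer.name.lower()}, level={hints_given + 1})"
```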

4.1 The IS Update Rules: The Basics of the Formalisation

The formalisation of the hinting process follows the approach outlined in Matheson et al. (2000) in using detailed representations of information states and in handling the participants' deliberations using update rules, some of which are specific to the particular genre of dialogue. Thus some of the rules contain conditions and effects specific to the hinting process in tutorial dialogues. The conditions must be satisfied in order for the IS to be updated, and the kind of update that will take place is defined by the effects. Example (2) contains an update rule which models the circumstances in which the hinting strategy relevant concept should be generated (the notation used in the rule is described in sections 5 and 6 below). This strategy is exemplified in utterance T[3] in (1) above. The student is responding to a check for the origin of the mistake in utterance T[1] in (1), but the answer given in S[2] is wrong.⁴ Therefore, the tutor points to a relevant concept.

⁴ Three kinds of answers are classified as "wrong": genuinely wrong answers, wrong steps in the task, and no answer at all.

(2)
name:   relevant_concept
cond:   in(IS.DH, or_mistake(T))
        in(IS.LM, wrong_answer(S,DA))
effect: push(IS.INT, ack(T))
        push(IS.INT, rel_concept(T))

The conditions for the hinting formalisation presented here are the ones that must hold according to the algorithm. Briefly, the conditions state that the dialogue history contains a check for the origin of the mistake (or_mistake) and that the latest move is a wrong answer. The effects are that the tutor acquires the intentions to acknowledge the student's utterance and to perform a relevant concept hint. The rule thus both implements the student model, determining whether a hint should be generated, and determines, via the effects, how informative the hint should be. The type of hint to be generated depends on the type of answer elicited, and this too is represented in the effects, as shown.
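Rule (2) can be transcribed into executable form as follows; the move terms are encoded as plain tuples, and the dict layout matches the IS sketch in section 1.1. Only the condition and effect content comes from (2) itself; everything else is our illustration.

```python
def relevant_concept(state: dict) -> bool:
    """Fire rule (2) when its conditions hold; report whether it fired."""
    applicable = (("or_mistake", "T") in state["DH"] and
                  state["LM"] is not None and
                  state["LM"][0] == "wrong_answer")
    if applicable:
        state["INT"].append(("ack", "T"))          # acknowledge the answer
        state["INT"].append(("rel_concept", "T"))  # then give the hint
    return applicable

# The situation after T[1]/S[2] in example (1):
state = {"AGENDA": [], "OBL": [], "INT": [],
         "LM": ("wrong_answer", "S", "DA"),
         "DH": [("or_mistake", "T")]}
relevant_concept(state)   # INT now holds ack(T) and rel_concept(T)
```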

5 The Tutor Model

The notion of context accommodation has been used in the past to deal with issues such as over-answering (see Cooper et al. (1999)), Questions Under Discussion (qud) (Ginzburg, 1996), and the incorporation of different plans (Cooper et al. (2000), Kreutel and Matheson (2000), Larsson et al. (2000)). Here we suggest the application of context accommodation to the hinting process. In tutorial dialogues the intelligent system can use context accommodation to decide whether the student is on the right track, or whether the hinting process needs to start, by accommodating steps in a predefined order from a given plan.

Three levels of planning are suggested here. At the top level, domain holds a set of plans for all the lab tasks, each consisting of a set of subplans, called preconditions, into which the total plan is broken down.⁵ These have to be realised for the goal of the total plan to be fulfilled. Every precondition is in turn broken down into decompositions, which represent steps in the possible reasoning process towards achieving the precondition. Therefore, if there is more than one way of reasoning, all the options should be included in the database; the same goes for the preconditions and the plans. The agenda holds the subplan at hand, with its preconditions and decompositions, as well as realisations (fixed utterances) for the particular content of every decomposition. In a system that provides a dynamic language generation module, the relevant hint, the decomposition, and all discourse planning information can be passed on, allowing the module to generate an appropriate phrase. A plan of the kind described for the BE&E corpus can be seen in (3); the decompositions (dec) refer only to the last precondition (prec) here:

(3)
header:      Voltage Lab: Measure Voltage(VDC)
prec:        Polarity observed
             Meter on while making measurement
             Meter set to VDC
             Both leads attached
             Meter attached
             Circuit not powered down
             Difference in charge between the two ends of the meter when attached to the circuit
dec1:        vol.difpot.meas
dec2:        vol.meas.so
dec4:        vol.meas.lo
dec5:        vol.difpot.so
dec6:        vol.difpot.lo
dec7:        sou.dif
dec8:        lo.dif
dec9:        sou.ex
dec10:       lo.ex
subeffects:  Difference in charge between the two ends of the meter when attached to the circuit
totaleffect: Voltage Lab: Measure Voltage(VDC)
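The three planning levels can be pictured as nested data, with domain a set of plans and each plan decomposed as in (3). The layout below is our own sketch; the strings are taken (and partly abridged) from (3).

```python
measure_vdc = {
    "header": "Voltage Lab: Measure Voltage(VDC)",
    "prec": [
        "Polarity observed",
        "Meter on while making measurement",
        "Meter set to VDC",
        # ... the remaining preconditions from (3)
        "Difference in charge between the two ends of the meter "
        "when attached to the circuit",
    ],
    # Decompositions: reasoning steps towards the last precondition.
    "dec": ["vol.difpot.meas", "vol.meas.so", "vol.meas.lo",
            "vol.difpot.so", "vol.difpot.lo", "sou.dif",
            "lo.dif", "sou.ex", "lo.ex"],
    "subeffects": ["Difference in charge between the two ends of the "
                   "meter when attached to the circuit"],
    "totaleffect": "Voltage Lab: Measure Voltage(VDC)",
}

# Top level: one plan per lab task.
domain = [measure_vdc]
```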

Finally, the Dialogue History (dh), which holds all the dialogue acts performed in one hinting session, and the system variable Latest Move (lm) communicate with the agenda via the update rules.⁶ On the basis of these rules, either the agenda will be modified, in which case the generation module will be activated, or not, in which case the agenda is used only for passively keeping track of the student's steps towards the realisation of the goal of the lab task at hand, by popping preconditions off agenda and popping plans off domain.

⁵ Some of these plans already exist for the BE&E corpus, using the same notation.

⁶ Update rules are domain-independent but genre-specific.


Update rules are used to model everything described here, as well as the accommodation of any preconditions performed and any decompositions which occur in a random order, whenever this is allowed by the task. This is the case only when certain steps, which constitute preconditions for the one currently being performed, have already been performed themselves. For instance, resetting the meter is a precondition of moving any wires when making a measurement. Context accommodation accommodates steps in the task, or the relevant reasoning, which are encountered before they are expected. It captures the fact that the student is following specific points in the tutor's plan, so that there is no reason to go through the steps that cover them again explicitly. Example (4) shows the update rule for accommodating random steps, that is, correct steps (or preconditions) that are taken by the student in an order which differs from the order in the tutor's plan:

(4)
name:   accommodation of random step
cond:   in(IS.LM, cor_answer(S,Substep))
        match_agenda(Substep, Agenda, Domain)
        not(last_member(AGENDA, cor_answer(S,Substep)))
effect: push(IS.AGENDA, cor_answer(Substep))

The conditions specified are: (i) that there is a correct answer just performed by the student (this is the step to be accommodated); (ii) that the correct answer matches a subplan in agenda which is currently the one accommodated by domain; and (iii) that the substep at hand is not the final one in agenda.⁷ The effect is that the substep is pushed on top of the agenda, and the task can continue from there.

⁷ If it were, another update would fire, because it would mean that the reasoning, or task, has been correctly completed.
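Rule (4) in executable form, continuing the dict sketch from above: the agenda is modelled as a simple list of open substeps with the final one last, which is our simplification of the push/last_member notation, and the step labels are hypothetical.

```python
def accommodate_random_step(state: dict) -> bool:
    """Accommodate a correct step taken out of the expected order."""
    lm = state["LM"]
    if lm is None or lm[0] != "cor_answer":
        return False                 # (i) needs a correct student step
    substep = lm[1]
    agenda = state["AGENDA"]
    if substep not in agenda:
        return False                 # (ii) must match the open subplan
    if substep == agenda[-1]:
        return False                 # (iii) must not be the final step
    agenda.remove(substep)           # effect: push the accommodated step
    agenda.insert(0, substep)        # on top; the task continues from there
    return True

state = {"AGENDA": ["reset_meter", "move_wires", "read_display"],
         "OBL": [], "INT": [], "DH": [],
         "LM": ("cor_answer", "move_wires")}
accommodate_random_step(state)   # AGENDA now starts with move_wires
```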

6 An AVM Representation

An attribute-value matrix (avm) representation of the dialogue in Figure 1 is shown in Figure 2. This represents the course of actions that would produce the last dialogue act by the tutor according to the update rules that formalise the algorithm. For current purposes the fields used are agenda, generating the steps to be followed; obl, holding any actions that have been classified as obligations; int, which here holds the actions to be realised in one turn; and lm and dh, which stand for Latest Move and Dialogue History respectively, and which, as mentioned above, are used for capturing some aspects of the student model. For this reason dh includes all the previous dialogue acts, until the system comes across an update rule that specifically tells it to empty the dh.

The Intelligent Tutor, and with it the hinting session, is activated by an information request from the student (S[1]). This is acknowledged explicitly, and the hint relevant concept points the student in the correct direction (T[2]). With this prompt the student remembers something remotely relevant and gives a grain of truth answer (S[3]). That makes the tutor generate an acknowledgement again, and a speak to answer hint as a follow-up (T[4]). This action is interrupted (S[5]) and continued in T[6]. (There is no formal account of interruptions here.) What should be a set amount of time goes by and the student does not respond at all, so the tutor's interpretation is that the student does not follow, which the algorithm classifies as a wrong answer. Hence, a logical relation is produced to give more profound guidance (T[7]). The student gives another wrong answer (S[8]), and that is enough for the tutor to start explaining the current substep (T[9]). After this she checks for the origin of the problem (T[9]) in order to assess the student anew and perhaps generate an appropriate hint. In this case the student gives another wrong answer (S[10]). All these moves are held in dh, as shown, and as mentioned above lm is the latest move, here the student's wrong answer. Together, dh and lm model the student's performance, and based on the number of wrong answers the tutor will now direct the student to read the lesson again (T[11]). The student is obliged to do this without negotiation, and the session will stop in the state following the one represented by this particular avm (after the point lesson intention has been performed). obl holds the obligation for the tutor to acknowledge the student's answer; here this is done implicitly, but that is enough, given the special obligations of the genre. agenda holds the acts that must be produced next, based on the update rules; these become the tutor's intentions, as represented in int, and are the moves to be realised in the current turn.

(An IS avm with the fields agenda, int, lm, dh, and obl; the field values are as described in the text above.)

Figure 2: AVM representation of an Information State
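The state depicted in Figure 2 can be indicated in the dict encoding used in the earlier sketches. The dialogue acts follow the narration in this section; the term encoding is, as before, illustrative only.

```python
figure_2_state = {
    "AGENDA": ["ack", "point_lesson"],     # acts to be produced next
    "INT":    ["ack", "point_lesson"],     # realised in the current turn
    "LM":     ("wrong_answer", "S[10]"),
    "DH":     ["info_request(S[1])", "ack(T[2])", "rel_concept(T[2])",
               "grain_of_truth(S[3])", "ack(T[4])",
               "speak_to_answer(T[4],T[6])", "wrong_answer(no response)",
               "logical_relation(T[7])", "wrong_answer(S[8])",
               "explain(T[9])", "or_mistake(T[9])"],
    "OBL":    ["address(S[10])"],          # discharged implicitly by ack
}
```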

7 Related Work

The BE&E project (Core et al., 2000) also assumes the trindi framework and a plan-based approach, but does not allow for partial order in the student's steps, modelled here by context accommodation. The project employs multi-turn tutorial strategies, some of which are motivated by theoretical interests similar to the ones presented here. However, the number of strategies is small, and no emphasis is given to the way information is made salient, which is the aim of our taxonomy. The criteria for using one strategy over another are also not clear. Note that Core et al. (2000) contains descriptions of other Intelligent Tutoring Systems that cannot be covered here for lack of space. Miss Lindquist (Heffernan and Koedinger, 2000) also has some domain-specific types of questions that resemble the BE&E strategies in form. Although there is mention of hints, and the notion of gradually revealing information by rephrasing the question is prominent, there is no taxonomy of hints or any suggestion for producing them dynamically. A detailed analysis of hints can also be found in the CIRCSIM project, and in particular in the work of Hume et al. (1996). This paper has largely been inspired by the CIRCSIM work, both for the general planning and for the taxonomy of hints, although the strategies recognised there are domain-specific.

8 Conclusion

An analysis of moves based on obligations and a taxonomy of hints have been proposed. An algorithm formalising the hinting process based on the obligations and the taxonomy, and update rules which model the algorithm in accordance with the trindi project proposals, have been presented briefly. A suggestion has been put forward for applying context accommodation to hinting and thus modelling some aspects of the Intelligent Tutor. Future work includes the integration of the move analysis into the hinting process, a reasoner for evaluating and categorising the student's answers, a database as described above and, of course, the full implementation of the system, which we expect will provide a useful basis for evaluating the approach described here.

References

Peter Bohlin, Robin Cooper, Elisabeth Engdahl, and Staffan Larsson. 1999. Information states and dialogue move engines. In Jan Alexandersson, editor, IJCAI-99 Workshop on Knowledge and Reasoning in Practical Dialogue Systems.

Robin Cooper, Staffan Larsson, Colin Matheson, Massimo Poesio, and David Traum. 1999. Coding instructional dialogue for information states. Technical report, University of Gothenburg.

Robin Cooper, Staffan Larsson, Elisabeth Engdahl, and Stina Ericsson. 2000. Accommodating questions and the nature of QUD. In Proceedings of Götalog 2000.

Mark G. Core, Johanna Moore, and Claus Zinn. 2000. Supporting constructive learning with a feedback planner. Technical report, Human Communication Research Centre, University of Edinburgh.

Reva Freedman, Yujian Zhou, Michael Glass, Jung Hee Kim, and Martha W. Evens. 1998. Using rule induction to assist in rule construction for a natural-language based intelligent tutoring system. In Proceedings of the Twentieth Annual Conference of the Cognitive Science Society, pages 362–367, Madison.

Jonathan Ginzburg. 1996. Dynamics and the semantics of dialogue. In Logic, Language and Computation, volume 1.

Neil T. Heffernan and Kenneth R. Koedinger. 2000. Building a 3rd generation ITS for symbolization: Adding a tutorial model with multiple tutorial strategies. In Proceedings of the ITS 2000 Workshop on Algebra Learning, Montreal, Canada.

Gregory D. Hume, Joel A. Michael, Allen A. Rovick, and Martha W. Evens. 1996. Hinting as a tactic in one-on-one tutoring. Journal of the Learning Sciences, 5(1):23–47.

Joern Kreutel and Colin Matheson. 2000. Incremental information state updates in an obligation-driven dialogue model. Language and Computation. To appear.

Staffan Larsson, Peter Bohlin, Johan Bos, and David Traum. 1999. TrindiKit 1.0 Manual.

Staffan Larsson, Robin Cooper, and Elisabeth Engdahl. 2000. Question accommodation and information states in dialogue. In Third Workshop in Human-Computer Conversation, Bellagio.

Colin Matheson, Massimo Poesio, and David Traum. 2000. Modelling grounding and discourse obligations using update rules. In Proceedings of NAACL 2000, Seattle.

Charles Rich and Candace L. Sidner. 1998. COLLAGEN: A collaboration manager for software interface agents. User Modeling and User-Adapted Interaction, 8(3/4):315–350.

David R. Traum and James F. Allen. 1994. Discourse obligations in dialogue processing. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, pages 1–8.
