Educ Inf Technol DOI 10.1007/s10639-014-9314-z

Intelligent fuzzy spelling evaluator for e-Learning systems Udit Kr. Chakraborty & Debanjan Konar & Samir Roy & Sankhayan Choudhury

© Springer Science+Business Media New York 2014

Abstract Evaluating learners' responses in an e-Learning environment has been a topic of current research in Human Computer Interaction, e-Learning, Education Technology and Natural Language Processing. The current paper presents a twofold strategy to evaluate single-word responses of a learner in an e-Learning environment. The response to be evaluated may contain errors committed due to lack of knowledge as well as inadvertent mistakes made while typing the answer. The proposed system benevolently considers such errors and still awards the learner partial marks. The feature incorporated in this work adds a human element to the mechanised system of evaluation and assessment in an e-Learning environment.

Keywords e-Learning · Evaluation · Inadvertent error · Fuzzy automata · Fuzzy membership

U. K. Chakraborty (*) · D. Konar
Department of Computer Science & Engineering, SMIT, Majhitar, Rangpo, Sikkim (E), India
e-mail: [email protected]

D. Konar
e-mail: [email protected]

S. Roy
Department of Computer Science & Engineering, NITTTR, Sector III, Salt Lake City, Kolkata, West Bengal, India
e-mail: [email protected]

S. Choudhury
Department of Computer Science & Engineering, University of Calcutta, Kolkata, West Bengal, India
e-mail: [email protected]

1 Introduction

Advances in our power to compute using machines have over the years changed, and continue to change, the way man interacts with the world. The field of education


and academics has been no exception, and has been as much a part of the change as all other areas. The advances have resulted in education being delivered across geographical barriers, without the learner having to move out of his comfort zone. The expertise and knowledge of stalwarts is now accessible at the click of a button, and difficult topics can be learnt at a pace suited to one's understanding, using technology-driven and technology-enabled learning media, through what is better known as e-Learning.

In spite of such huge developments and important technological milestones, progress in the field of e-Learning systems has largely stayed out of focus. Development in this area has largely been limited to multi-modal content delivery, web-enabled learning and distance learning. Consolidated models, implementing pedagogically correct learning methodologies acquired over the years through experience and systematic observation, have not yet come into use. While existing practice has improved the quality of the learning experience for students (Koohang and Durante 2003), the full potential is yet to be achieved, and these are still early days for e-Learning. The most likely reason behind this constrained growth can most aptly be described as two powerful information processors (human and computer) attempting to communicate with each other via a narrow-bandwidth, highly constrained interface (Tufte 1989). It is only imperative that removing the constraints from this 'constrained interface' would increase the effectiveness of HCI, and e-Learning systems can only gain from such efforts. Efforts directed at removing these limitations, and thereby increasing the efficiency of e-Learning, are being made. Claims have been laid towards successful achievement of the goal of dialog-based tutoring and evaluation (Graesser et al. 2005); however, a lot is yet to be achieved in this field.

While delivery of content is important in education, evaluation also forms a support pillar on which the effectiveness and credibility of the education system relies heavily. No education system has ever been complete without assessment of the learner's achievement of learning objectives and his evaluation by teachers through an established process. The process of assessment and evaluation in e-Learning faces similar constraints to those mentioned above. The issues are further complicated by the fact that our efforts are concentrated on reducing human interaction in the evaluation process. This requires the system to be smart enough to replace the human evaluator, which amounts to having human-like qualities: knowledge of the subject, knowledge of expression, knowledge of the learner's background, and also the prudence to judge the learner's ability and the benevolence to reward him in spite of acceptable mistakes. Assessment and evaluation thus become a highly humane activity to be incorporated into a generalized pattern. The task becomes particularly complex as the communication between the learner and the evaluation system is carried out in natural language.

The work reported in this paper is aimed at building a system intelligent enough to benevolently adapt to variations in learner response and deduce the correct meaning from seemingly incorrect answers, much the same way a human evaluator would. In doing this we believe that we overcome a few limitations of natural-language-based HCI.
We use a Fuzzy Logic based approach to evaluate an answer on a scale of 0 to 1. As is natural in fuzzy decision making, the outcome of the response evaluation is not limited to just correct and incorrect but also reports the degree of correctness. The system reported in this work considers a scenario where the learner is facing questions in an examination, all of which have single-word answers. The learner is expected to commit mistakes in answering the questions, where the mistakes may result from lack of knowledge and also, at times, from typographical errors or phonetic misinterpretations. The evaluation system is expected to behave like a benevolent human evaluator, interpret the true state of the learner, and award partial marks for such inadvertent errors.

2 Learners' response in an e-Learning environment

A substantial number of papers have in recent years reported research in the field of HCI through natural-language-based textual interaction (Pulman and Sukkarieh 2005; Sukkarieh et al. 2003). NLP-based techniques using POS taggers and various structured grammars, such as phrase structure grammars (Sukkarieh et al. 2003), report high complexity. Course content delivery apart, the marking procedure used to evaluate the individual particularly stands out in this context. Sukkarieh et al. (2003) report some interesting results in automated marking systems. However, there are two points of contention on the acceptability of such systems: the first is technological viability, and the second, more important one, is the intelligence factor which allows a human evaluator to judge the true state of the learning individual.

e-Learning provides scope for typed interactions under primarily two conditions, viz., dialog-based interactions between the learner and the system, which simulate classroom learning where the learner and the teacher interact (Graesser et al. 2001a, b, 2003), and evaluation scenarios where the response of the learner is communicated through text. Interaction-based teaching-learning systems using natural language dialogs have been reported by Evens et al. (2001). Graesser et al. (2001a) use a coordinated dialog-based system where the learner and the pedagogical agent communicate over lesson content and world knowledge. The communicated content material is presented in a special script, and latent semantic analysis is used to gauge the proximity of the topics. In Graesser et al. (2003), Graesser, Hu & McNamara report a technique that engages learners in mixed-initiative dialog, which assists them in finding a better answer to a question under a supervised learning scenario. In spite of all the advances reported, HCI through typed-in text is still at a nascent stage, primarily because of the complexities involved in language processing and our lack of knowledge about the language-learning phenomenon.

Comparatively, close-ended and guided or computer-assisted assessment of learning achievement is more commonly practiced in the present e-Learning scenario. Such schemes, like objective-type questions, seek to assess the learner based on his response to questions which have the answer options available in the test item itself. Close-ended test items can be presented in various ways, namely multiple choice questions (MCQs), fill-in the missing words, matching the columns, etc. (Johnson et al. 1974). Some of the other broad categories of test types are True/False, Assertion-Reason questions (combining elements of MCQ and true-false), multiple response questions (MRQs) (similar to MCQs, but involving the selection of more than one answer from a list), graphical hot-spot questions (involving selecting an area(s) of the screen),


text/numerical questions (involving the input of text or numbers at the keyboard), column matching, sore finger questions, ranking questions, sequencing (positioning text or graphic objects in a given sequence), etc. (CAA Center 1999). The salient points contributing to the popularity of MCQs may be their objectivity, quantifiability and user-friendliness, and also that they provide scope for more effectively targeted feedback (Ramesh et al. 2005). In spite of all such popularity-enhancing features, the limitations of MCQs are glaring in comparison to text-based question-answer schemes. Through MCQs, it is difficult to ascertain the theoretical knowledge of the learner and his understanding of the conceptual aspects of the subject. An MCQ with four options presents a one-in-four chance of 'guessing' the correct answer. This can effectively result in a person having correct answer knowledge of only 20 % of a subject scoring 40 % marks in a paper composed of MCQs with four options (Bush 1999). Apart from this, the usage of MCQs in mathematical and related knowledge-based subjects has also been debated (Abramovitz and Berezina 2004). To provide a more meaningful learning experience, or assessment of learning achievement, in an e-Learning environment, a learner must be offered the scope of interacting with the system through typed-in text.

The most popular approach to evaluating learners' responses to questions needing text-based answers has been Latent Semantic Analysis (LSA) based techniques. A statistical-algebraic technique for extracting and inferring contextual usage of words in documents (Landauer et al. 1998), LSA consists of first constructing a word-document co-occurrence matrix, scaling and normalizing it with a view to discriminating the importance of words across documents, and then approximating it using singular value decomposition (SVD) in R dimensions (Bellegarda 2000), where a document can be anything ranging from a sentence to a paragraph or even a larger unit of text. Applications of LSA, and its validation, can be found in the works of Graesser et al. (1999), who applied it in AutoTutor. In AutoTutor, LSA is the primary technique in the evaluation mechanism that grades the quality of learners' responses. Intelligent Essay Assessor (Foltz et al. 1999) uses LSA for evaluating essays written by learners, which can form part of any content-based course evaluation system. Many other such examples can also be cited.
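To make the LSA pipeline just described concrete, the following is a minimal sketch using scikit-learn. The toy corpus, the reference answer and response, and the choice of R = 2 latent dimensions are illustrative assumptions of ours, not details drawn from the systems cited above.

```python
# Minimal LSA sketch: build a word-document matrix, approximate it with SVD,
# and compare a learner response to a reference answer in the latent space.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "a compiler translates source code into machine code",
    "an interpreter executes source code line by line",
    "a finite automaton recognises a regular language",
]
reference = "a compiler converts a program into machine instructions"
response = "compilers turn programs into machine instructions"

# Word-document co-occurrence matrix, scaled and normalised with TF-IDF.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(documents + [reference, response])

# Approximate the matrix by SVD in R dimensions (R = 2 here, illustrative).
svd = TruncatedSVD(n_components=2, random_state=0)
latent = svd.fit_transform(X)

# Proximity of the response to the reference in the latent space.
score = cosine_similarity(latent[-1:], latent[-2:-1])[0, 0]
print(f"LSA similarity: {score:.3f}")
```

In a real grading system the latent space would be trained on the full course corpus and the similarity score compared against a rubric threshold.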

3 Intelligent response analysis

It is fairly obvious that a better way to test a learner's knowledge is to have him answer text-based questions. The certainty of the limitations of MCQs notwithstanding, the popularity of the mode lies in its simplicity of implementation. The inherent complexities of natural language processing make the task of text-based answer evaluation difficult, so much so that it would probably be inconvenient even for a human to teach another to do so. Therefore, whatever method is employed, the evaluation of text-based answers is not only computationally demanding but also cannot match the accuracy and expertise of a human tutor.

The problem is augmented manifold if we consider the true picture of the real scenario, where the learner commits inadvertent mistakes while answering questions. Even if we eliminate the problem of illegibility, which poses no issues in the e-Learning scenario since the learner would be using the keyboard, a variety of new issues come up. A partial classification of such and other related mistakes may be handled in a suitable manner provided the nature and amount of the error is known or can be predicted in advance (Gill and Greenhow 2007). But such predictions, and in fact all such evaluation schemes, have been developed under the assumption that the learner answering the questions in front of the terminal is perfect, both intellectually and emotionally. Predictive analysis of such problems is difficult because far too many factors influence them, and thus predictive models fail to perform as well as human evaluators (Halverson 2006). A model proposed by Shen et al. (2009) performed tests using bio-sensors that measured emotions such as:

1. interest
2. engagement
3. confusion
4. frustration
5. boredom
6. hopefulness
7. satisfaction
8. disappointment

The parameters used for measurement included heart rate (HR), skin conductance (SC), blood volume pressure (BVP) and EEG brainwaves. These tests revealed, among other things, that learners under stress are more prone to making mistakes than those who take tests in relaxed environments.

Learners may commit mistakes that occur due to lack of knowledge, which are the primary type. However, other causes cannot be ruled out (Gray and Altmann 2001; Eason et al. 1955). Inadvertent errors such as spelling and grammar errors may, and do, easily creep in under test situations. Such errors have a direct implication for the evaluation of the learner, who might have misspelt a word that would be treated as an incorrect answer even if it is merely a spelling error. Such situations, and possible remedies for them, have been proposed and presented in our previous works (Chakraborty et al. 2010, 2011). This paper builds upon the same problem and proposes yet another evaluation scheme, using Fuzzy Sets, to handle such situations.

The basic aim of the proposed system is to place the learner in his rightful state of knowledge or of ignorance in spite of inadvertent spelling errors committed by him, which would otherwise be outright rejected by a standard spell-check program. A spell checker works primarily on the principle of comparing the entered text with a dictionary and prompting the user with the correct spelling in case there is an error. However, in the scenario under consideration, where the user is a learner being evaluated on his answers, prompting the correct spelling would not be an option. The idea, therefore, is to accept the word as entered by the learner and then check its correctness.

This paper proposes a scheme for intelligent analysis of the learner's response in an e-Learning environment, based on the cognitive model shown in Fig. 1. The purpose is to implement the mapping q → x, so that the system is robust enough to withstand the petty errors that a learner is prone to commit during his typed-in textual dialogue with the system.


Fig. 1 Cognitive model for intelligent recognition of learners’ response (Chakraborty et al. 2010)

The aim is to place the learner in his correct state of knowledge or of ignorance, benevolently accepting, with deductions in the marks awarded, inadvertent mistakes in the form of spelling errors. The present work considers evaluating textual responses, with spelling errors, to questions requiring only single-word responses. The work presented here addresses the same issue as in (Chakraborty et al. 2010, 2011), with the added complexity that the current work also handles learner responses whose length does not match the length of the expected response.

The problem addressed in this paper can thus be formulated as follows. Let $r = a_1 a_2 \ldots a_n$ be a string of alphanumeric characters that corresponds to an expected response of a learner during an interaction between the learner and the system. There is a set $A = \{s_1, s_2, \ldots, s_n\}$ of strings such that $s_i \neq r$ for each $i$. Each $s_i$ is a variation of the expected response $r$ such that a human evaluator would interpret it as an acceptable response even though it carries some spelling mistake within a tolerable limit. The set $B$ of unacceptable responses is given by $B = \Sigma - A - \{r\}$, where $\Sigma$ is the universe of words of length $n$. It may be noted that both sets $A$ and $B$ are unknown, in the sense that the entire lists of acceptable and unacceptable responses are not given at the outset. Then, given an arbitrary $n$-character learner's response $x$, the issue is how to establish the mapping:

$$f(x) = \begin{cases} \text{Accept}, & \text{if } x = r \text{ or } x \in A \\ \text{Reject}, & \text{if } x \in B \end{cases}$$

A similar problem has been handled in (Chakraborty et al. 2010, 2011), under the consideration that the length of the typed-in response of the learner is the same as that of the actual answer. In other words, the previous works consider only two types of errors, viz., substitution (the → thw) and transposition (the → hte), but do not propose any solution for the other two commonly occurring types, i.e., insertion and deletion errors.


The current paper builds upon the previous contributions and proposes solutions for handling all four of the above-mentioned types of errors and their variants, as listed in Table 1 (http://imm.dh.obdurodon.org/about/help). The present method is robust enough to handle errors even beyond the classical list of commonly committed ones, such as the omission of any number of letters from any position, instead of just the beginning, end and middle. It can similarly handle any number of extra characters at any position, and even a combination of such errors, and still return an evaluation of the learner's response.

4 Basic terminologies

In the course of the discussion in the following sections we will be using some terms and associated concepts, which we introduce here.

Formal Language: A formal language (Linz 2006) is a set of words, i.e. finite strings of letters, symbols, or tokens. The set from which these letters are taken is called the alphabet (Linz 2006) over which the language is defined.

Formal Notions:
1. Alphabet: A set of symbols, denoted by V (e.g., V = {1, 2, 3, 4, 5, 6, 7, 8, 9}).
2. String: A string over an alphabet V is a sequence of symbols belonging to the alphabet (e.g., "518" is a string over the above V).
3. Linguistic Universe: Denoted by V*, the set of all possible strings over V, including the empty string λ. The set V+ denotes the set V* − {λ}.
4. Grammar: Gives a generative perspective: it defines the way in which all admissible strings can be generated.

Table 1 Types of errors handled by the proposed system and their descriptions

Sl. no. | Type of error | About
1 | Apheresis | The omission of a letter from the beginning of the word (e.g., 'nd' instead of 'and')
2 | Apocope | The omission of a letter from the end of the word (e.g., 'an' instead of 'and')
3 | Epenthesis | The insertion of extraneous letters into a word (e.g., 'Leess' instead of 'Less')
4 | Homophone | The substitution of a word which sounds the same but is spelled differently (e.g., 'their' instead of 'there')
5 | Keyboarding | Errors resulting from typing too quickly or carelessly and hitting nearby keys (e.g., 'exsmple' instead of 'example')
6 | Metathesis | The transposition of adjacent letters (e.g., 'Caer' for 'Care')
7 | Phonetic | Words spelled the way they are pronounced (e.g., 'Sunami' for 'Tsunami', 'Bacup' for 'Backup')
8 | Syncope | The omission of a letter from the middle of a word (e.g., 'Phne' for 'Phone')


4.1 Automata

An automaton (Linz 2006) is an abstract model of a digital computer. It consists of a number of internal states. The automaton can transit from one such state to another based on the input and some predefined rules.

4.2 Fuzzy logic

A form of many-valued logic which deals with reasoning that is approximate rather than fixed and exact, as in traditional (crisp) logic. Fuzzy logic (Wang et al. 2007; Fabian et al. 2006) provides a remarkably simple way to draw definite conclusions from vague, ambiguous or imprecise information.

4.3 Spell checker

A spell checker (Hussain and Naseem 2013) is used to detect words in a document that may not be spelled correctly. It attempts to verify spelling (and sometimes grammar, as a grammar checker) in a document. It scans the text, extracts words and compares the extracted words against a known list of correctly spelled words.

4.4 Fuzzy finite state automata

A fuzzy finite-state automaton (FFA) is a 6-tuple M = (Σ, Q, Z, q₀, δ, ω), where Σ is a finite input alphabet, Q is the set of states, Z is a finite output alphabet, q₀ is an initial state, δ : Σ × Q × [0,1] → Q is the fuzzy transition map, and ω : Q → Z is the output map (Wen and Min 2006).
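As an illustration of the definition above, the following is a minimal Python sketch of an FFA. The states, transition weights and output mapping are hypothetical, and the transition map is keyed on (state, symbol) pairs for simplicity rather than carrying the full δ : Σ × Q × [0,1] → Q signature.

```python
# Sketch of a fuzzy finite-state automaton M = (Sigma, Q, Z, q0, delta, omega).
# Each transition carries a membership weight in [0, 1]; omega maps the final
# state to an output symbol. All concrete values below are illustrative.

class FuzzyFSA:
    def __init__(self, sigma, states, outputs, q0, delta, omega):
        self.sigma = sigma      # finite input alphabet, Sigma
        self.states = states    # set of states, Q
        self.outputs = outputs  # finite output alphabet, Z
        self.q0 = q0            # initial state
        self.delta = delta      # {(state, symbol): (next_state, weight)}
        self.omega = omega      # {state: output}

    def run(self, word):
        """Consume a word, collecting the weight of each transition taken."""
        state, weights = self.q0, []
        for ch in word:
            state, w = self.delta.get((state, ch), (state, 0.0))
            weights.append(w)
        return self.omega.get(state), weights


# Toy automaton for the two-letter answer "ab".
ffa = FuzzyFSA(
    sigma={"a", "b"},
    states={"q0", "q1", "q2"},
    outputs={"accept", "reject"},
    q0="q0",
    delta={("q0", "a"): ("q1", 1.0), ("q1", "b"): ("q2", 1.0)},
    omega={"q0": "reject", "q1": "reject", "q2": "accept"},
)
print(ffa.run("ab"))  # ('accept', [1.0, 1.0])
```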

5 Solution strategy

The model test scenario consists of the learner and his machine (terminal), with the learner being presented with a set of questions appearing on the screen, all of which have single-word answers. The set of all correct answers to these questions constitutes the formal language for the test. The learner is expected to commit mistakes in answering, where the mistakes may be due to improper knowledge, improper or careless typing, or even mis-heard or incorrectly pronounced words. The aim of the system is to judge whether the learner is in a state of knowledge or of ignorance and to evaluate each answer. The marking scheme is expected to behave like a benevolent human evaluator and grade the learner on a scale of 0 to 1. Every question has a threshold value; any score below the threshold is considered zero, while any value above the threshold is added to the cumulative score of the learner. Since we award a score even for incorrect spellings, it is appropriate to keep a high threshold. In our experiments we set the threshold for a particular question to be the average of all the individual scores for that question:

$$M_{threshold} = \frac{1}{N}\sum_{i=1}^{N} M_{ij} \qquad (1)$$

where N is the number of students and j is the serial number of the question under consideration.
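A direct realisation of Eq. 1 is shown below; the score list is hypothetical.

```python
# Eq. (1): the threshold for question j is the mean of all N learners' scores
# on that question. The scores below are hypothetical.
scores_for_question_j = [0.92, 0.50, 0.58, 1.00, 0.58]

m_threshold = sum(scores_for_question_j) / len(scores_for_question_j)
print(f"M_threshold = {m_threshold:.3f}")  # scores below this count as zero
```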


5.1 Fuzzy finite state automata inspired approach

The solution to the stated problem is a two-layer approach: the first layer consists of a fuzzy finite state machine, inspired by the fuzzy automata approach, and the second layer comprises a fuzzy membership function based approach.

In the first part of the designed solution, we associate with every answer a Nondeterministic Finite Automaton (NFA), which moves from its initial state to the final state, making transitions from one state to another by reading characters as they appear in the learner's response. Every state transition is associated with a measured outcome, depending on the character encountered as input at that state. For any given word, at a given point in the transition, a given character may be correct or incorrect. While correct characters result in transitions with an outcome of unit value added to the final score, an incorrect entry results in a sub-unit value being added, depending on the nature of the error.

Every character in the alphabet set is associated with two sets of characters, namely Phonetically Valid Substitutions (PVSS) and Typographically Valid Substitutions (TVSS). The PVSS is built keeping in mind general errors committed while answering, and that words are spelt as they are pronounced; it consists of phonetically similar characters. The TVSS consists of the characters adjacent to the character under consideration on a QWERTY keyboard. If the expected input is not found, then the transition occurs with the outcome listed in the associated PVSS or TVSS, depending on whether the actual input belongs to either of these two sets of the expected input. If the actual input is neither the expected input nor in its PVSS or TVSS, the transition occurs with an outcome of zero added to the final score (Fig. 2). The current work considers every member of the TVSS and PVSS of every character to contribute equally to the score, and the weight of the outcome has been kept at 0.3. This is, however, arbitrary and may be changed depending on the importance of the character in the spelling or pronunciation of the word. The total score thus obtained is finally divided by the length of the correct answer to get the final score for the answer. An example is shown in Table 2.

Fig. 2 Cognitive model for Fuzzy Automata Inspired Approach for the word ‘TIGER’

Table 2 Sample evaluation of learners' response at first level

Expected response | PVSS | TVSS | Learners' response | Transition outcome | Justification
F | {P} | {E,R,T,D,G,C,V,B} | D | 0.3 | Belongs to TVSS
A | {E} | {Q,W,S,Z,X} | A | 1 | Expected input
I | {Y,E} | {U,O,J,K,L} | Y | 0.3 | Belongs to PVSS
L | Ø | {I,O,P,K} | K | 0.3 | Belongs to TVSS

Total score: (1 + 0.3 + 0.3 + 0.3)/4 = 0.475
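The first-level computation behind Table 2 can be sketched as follows. The PVSS/TVSS dictionaries are transcribed from the table, and the flat 0.3 weight follows the choice stated above; this is a sketch of the scheme, not the authors' implementation.

```python
# Sketch of the first-level (fuzzy-automaton-inspired) scoring: each character
# of the response earns 1.0 if it matches the expected character, 0.3 if it is
# a phonetically (PVSS) or typographically (TVSS) valid substitute, else 0.0.
# PVSS/TVSS sets below are copied from Table 2 and cover only that example.

PVSS = {"F": {"P"}, "A": {"E"}, "I": {"Y", "E"}, "L": set()}
TVSS = {
    "F": {"E", "R", "T", "D", "G", "C", "V", "B"},
    "A": {"Q", "W", "S", "Z", "X"},
    "I": {"U", "O", "J", "K", "L"},
    "L": {"I", "O", "P", "K"},
}

def level1_score(expected, response, weight=0.3):
    total = 0.0
    for exp_ch, got_ch in zip(expected, response):
        if got_ch == exp_ch:
            total += 1.0               # expected input
        elif got_ch in PVSS.get(exp_ch, set()) | TVSS.get(exp_ch, set()):
            total += weight            # valid phonetic/typographic slip
    return total / len(expected)       # normalise by answer length

print(level1_score("FAIL", "DAYK"))  # (1 + 0.3 + 0.3 + 0.3) / 4 = 0.475
```

Running it on the Table 2 example reproduces the listed total score of 0.475.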

5.2 Fuzzy function based positional error evaluation

The response word is next evaluated at the second level of the evaluator, which is a fuzzy function based module. This level considers a fuzzy function for each of the different types of errors, and considers the positional impact of the error committed in grading the response. To identify the type of error, the length and the content of the response word are compared with the expected input, and the proper function to be used is identified. Each error type has a different fuzzy function, as discussed in this section.

5.2.1 Error of insertion (epenthesis)

The error value x is determined as:

$$x = \sum \frac{|S| - P}{|S|} \qquad (2)$$

where |S| is the string length of the correct answer and P is the position of the error. The membership value for each error value x is given by:

$$\mu(x) = \frac{x}{A - B} + \frac{B}{B - A} \qquad (3)$$

where

$$A = \frac{-1}{|S|} \qquad (4)$$

and

$$B = \frac{U\,(2|S| - U + 1)}{2|S|} \qquad (5)$$

The values of A and B represent the limits of the values that x can take, and have been calculated by considering the best-case and worst-case scenarios. For example, if the correct answer to a question is 'computer', then we consider that the least error contribution is by the last character, and therefore the best case with an error is when the response is 'computerψ', where ψ may be any character. Then A becomes:

$$A = \frac{|S| - (|S| + 1)}{|S|} = \frac{-1}{|S|}$$

The worst-case scenario occurs when every alternate character is unwanted, e.g., ψcψoψmψpψuψtψeψr, where ψ can be any character. We consider an upper limit on the number of extra characters that an answer can have and still be accepted, and fix it at:

$$U = 1.5\,|S| \qquad (6)$$


Therefore B, the upper limit for x, becomes:

$$B = \sum \frac{|S| - P}{|S|} = \frac{(|S| - 1) + (|S| - 2) + \ldots + (|S| - U)}{|S|} = \frac{U\,(2|S| - U + 1)}{2|S|}$$

5.2.2 Error of omission (apheresis, apocope, syncope and others)

The error value x is determined as:

$$x = \sum \frac{|S| - P + 1}{|S|} \qquad (7)$$

where |S| and P have their usual meanings as in Eq. 2. The membership value for each error value x is given by Eq. 3. Considering the best-case and worst-case scenarios, A and B become:

$$A = \frac{1}{|S|} \qquad (8)$$

$$B = \frac{U\,(2|S| - U + 1)}{2|S|} \qquad (9)$$

5.2.3 Error of transposition (metathesis)

The error value x is determined as:

$$x = \sum \frac{|S| - P}{|S|} \qquad (10)$$

where |S| and P have their usual meanings as in Eq. 2. The membership value for each error value x is given by Eq. 3. Considering the best-case and worst-case scenarios, A and B become:

$$A = \frac{1}{|S|} \qquad (11)$$

$$B = \frac{|S| - 1}{|S|} \qquad (12)$$

5.2.4 Error of substitution (homophone)

The error value x is determined as:

$$x = \sum \frac{|S| - P + 1}{|S|} \qquad (13)$$

where |S| and P have their usual meanings as in Eq. 2. The membership value for each error value x is given by Eq. 3. Considering the best-case and worst-case scenarios, A and B become:

$$A = \frac{1}{|S|} \qquad (14)$$

$$B = \frac{U\,(2|S| - U + 1)}{2|S|} \qquad (15)$$
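The following sketch pulls Eqs. 2–15 together. The error positions P are assumed to be already identified (1-based), and the length-based detect_error_type helper is an assumption of ours: the paper states only that the length and content of the response are compared with the expected input.

```python
# Sketch of the second-level fuzzy evaluation (Eqs. 2-15). Error positions P
# are assumed known (1-based). The simple length-based error-type detection
# is our assumption, not a procedure spelled out in the text.

def membership(error_type, s_len, positions):
    """mu(x) = x/(A - B) + B/(B - A), with A, B chosen per error type."""
    U = 1.5 * s_len  # upper limit on extra/missing characters, Eq. (6)
    if error_type == "insertion":                            # epenthesis
        x = sum((s_len - p) / s_len for p in positions)      # Eq. (2)
        A, B = -1 / s_len, U * (2 * s_len - U + 1) / (2 * s_len)
    elif error_type == "omission":                           # apheresis etc.
        x = sum((s_len - p + 1) / s_len for p in positions)  # Eq. (7)
        A, B = 1 / s_len, U * (2 * s_len - U + 1) / (2 * s_len)
    elif error_type == "transposition":                      # metathesis
        x = sum((s_len - p) / s_len for p in positions)      # Eq. (10)
        A, B = 1 / s_len, (s_len - 1) / s_len
    else:                                                    # substitution
        x = sum((s_len - p + 1) / s_len for p in positions)  # Eq. (13)
        A, B = 1 / s_len, U * (2 * s_len - U + 1) / (2 * s_len)
    return x / (A - B) + B / (B - A)                         # Eq. (3)

def detect_error_type(expected, response):
    """Assumed length-based classification of the response."""
    if len(response) > len(expected):
        return "insertion"
    if len(response) < len(expected):
        return "omission"
    # Equal lengths: a single adjacent swap means transposition.
    diffs = [i for i, (a, b) in enumerate(zip(expected, response)) if a != b]
    if (len(diffs) == 2 and diffs[1] == diffs[0] + 1
            and expected[diffs[0]] == response[diffs[1]]
            and expected[diffs[1]] == response[diffs[0]]):
        return "transposition"
    return "substitution"

# 'Treee' for 'Tree': one extra character at position 5, the best case for
# insertion (x = A), so mu = 1.0 -- matching row 4 of Table 3.
print(detect_error_type("Tree", "Treee"))        # insertion
print(round(membership("insertion", 4, [5]), 3))  # 1.0
```

Note that by construction μ(A) = 1 and μ(B) = 0, so the membership falls linearly from the best-case to the worst-case error value.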


6 Results and discussions

As elaborated above, for a given question the learner's response is evaluated through a two-level evaluation process. The first level is built on the principles of fuzzy automata and considers typographical and phonetic errors based on the valid typographical and phonetic substitution sets of each character. The latter stage of the evaluator is built using fuzzy functions, which evaluate the learner's response taking into consideration the type and number of errors committed. The final score is the average of the scores of the two independent stages. However, since the first level is unable to handle insertion and deletion errors, we consider only the score of the second level in such cases.

Tests conducted on a set of 250 words of varying lengths, having different types of errors at different positions, and even multiple errors, show that the current method is better at handling insertion errors and words with multiple errors. It is also a worthwhile upgrade of the previous single-stage approach, as errors of insertion and deletion, which result in length variations in the learner's response, can now be handled. The results listed in Table 3 show six samples drawn from the set. The decision of acceptance or rejection of the score can be made based on the threshold calculated as shown in Eq. 1.
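A sketch of the combination rule just described, using level-1 and level-2 scores taken from Table 3 below:

```python
# Final score: average of the two levels, except that level 1 cannot handle
# insertion/deletion errors (length mismatch), where level 2 alone is used.

def final_score(expected, response, level1, level2):
    if len(expected) != len(response):
        return level2                    # level 1 inapplicable
    return (level1 + level2) / 2.0       # average of the two stages

# Row 6 of Table 3: 'Fomral' for 'Formal' (transposition, equal length).
print(final_score("Formal", "Fomral", 0.666667, 0.500000))  # 0.5833335
# Row 4 of Table 3: 'Treee' for 'Tree' (insertion, level 2 alone).
print(final_score("Tree", "Treee", 0.000000, 1.000000))     # 1.0
```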

Table 3 Test results for intelligent fuzzy spelling evaluator

Sl. no. | Expected response | Learners' response | Score of level 1 | Score of level 2 | Final score
1 | Automata | Autometa | 0.912500 | 0.931400 | 0.921950
2 | Compiler | Konpylar | 0.650000 | 0.344830 | 0.497415
3 | Chair | Cheyt | 0.580000 | 0.587629 | 0.583814
4 | Tree | Treee | 0.000000 | 1.000000 | 1.000000
5 | Formal | Formal | 1.000000 | 1.000000 | 1.000000
6 | Formal | Fomral | 0.666667 | 0.500000 | 0.583333

7 Conclusion

The present system implements evaluation of single-word answers from an e-Learning perspective and takes into consideration the benevolence displayed by an intelligent human evaluator. The problem stated is practical, and since the system is designed for e-Learning, our considerations were not limited to the types of errors alone but also to the ways in which they occur, and an intelligent scheme for such evaluation has been presented. The current method builds upon our earlier work and strengthens the basis of its application. The augmentation of the system with the additional level, using a fuzzy function based evaluation technique, adds to the types of errors that the system can handle. Errors which can be handled by both levels are treated better now, since we take the average of the scores delivered by the independent units. The variable thresholding further adds to the flexibility of the proposed system, as it can be tweaked depending on the level of strictness required.


References

Abramovitz, B., & Berezina, M. (2004). Disadvantages of multiple choice tests and possible ways of overcoming them. The Mathematics Education into the 21st Century Project, The Future of Mathematics Education, Jun 26–Jul 1, 2004.

Bellegarda, J. R. (2000). Exploiting latent semantic information in statistical language modelling. Proceedings of the IEEE, 88(8), 1279–1296.

Bush, M. (1999). Alternative marking schemes for on-line multiple-choice tests. 7th Annual Conference on the Teaching of Computing, Belfast.

CAA Center (1999). Designing effective objective test questions: an introductory workshop. Loughborough University, 17 Jun 1999.

Chakraborty, U. K., & Roy, S. (2010). Neural network based intelligent analysis of learners' response for an e-Learning environment. Proceedings of 2nd ICETC 2010, Vol. 2, pp. 333–338, Shanghai, China.

Chakraborty, U. K., & Roy, S. (2011). Fuzzy automata inspired intelligent assessment of learning achievement. Proceedings of 5th IICAI 2011, pp. 1505–1518.

Eason, G., Noble, B., & Sneddon, I. N. (1955). On certain integrals of Lipschitz-Hankel type involving products of Bessel functions. Philosophical Transactions of the Royal Society London, A247, 529–551.

Evens, M. W., Brandle, S., Chang, R.-C., Freedman, R., & Glass, M. (2001). CIRCSIM-Tutor: An intelligent tutoring system using natural language dialogue. 12th Midwest AI and Cognitive Science Conference, Oxford OH, pp. 16–23.

Fabian, R., Craciunean, V., & Popa, E. M. (2006). Intelligent system modelling with total fuzzy grammars. Proc. of the 8th WSEAS Int. Conf. on Mathematical Methods and Computational Techniques in Electrical Engineering, Bucharest, October 16–17, 2006, pp. 82–87.

Foltz, P. W., Laham, D., & Landauer, T. K. (1999). Automated essay scoring: Applications to educational technology. In Proc. of the ED-MEDIA'99 conference, Charlottesville: AACE.

Gill, M., & Greenhow, M. (2007). Learning via online-mechanics tests: update and extension. Proceedings of The Science Learning and Teaching Conference 2007, 19–20 Jun 2007, Keele University, England, pp. 34–39.

Graesser, A. C., Wiemer-Hastings, K., Wiemer-Hastings, P., & Kreuz, R. J. (1999). AutoTutor: a simulation of a human tutor. Cognitive Systems Research, 1(1), 35–51.

Graesser, A. C., Person, N., Harter, D., & the Tutoring Research Group. (2001a). Teaching tactics and dialog in AutoTutor. International Journal of Artificial Intelligence in Education, 12, 257–279.

Graesser, A. C., VanLehn, K., Rose, C., Jordan, P., & Harter, D. (2001b). Intelligent tutoring systems with conversational dialogue. AI Magazine, 22, 39–51.

Graesser, A. C. et al. (2003). Why/AutoTutor: A test of learning gains from a physics tutor with natural language dialogue. In Proc. 25th Annual Conference Cognitive Science Soc., pp. 1–6.

Graesser, A. C., Chipman, P., Haynes, B. C., & Olney, A. (2005). AutoTutor: an intelligent tutoring system with mixed-initiative dialogue. IEEE Transactions on Education, 48(4), 612–618.

Gray, W. D., & Altmann, E. M. (2001). Cognitive modeling and human computer interaction. In W. Karwowski (Ed.), International encyclopedia of ergonomics and human factors, 1, 387–391. New York: Taylor & Francis, Ltd.

Halverson, T. (2006). Integrating models of human-computer visual interaction. Proceedings of Conference on Human Factors in Computing Systems 2006, ACM, pp. 1747–1750.

Hussain, S., & Naseem, T. (2013). Spell checking. www.panl10n.net/Presentations/Cambodia/Sarmad/StatisticalSpellCheckers.pdf.

Johnson, W., Russel, S., Nicholas, A., & Clanton, E. S. (1974). Effects of alternative positioning of open-ended questions in multiple-choice questionnaires. Journal of Applied Psychology, 59(6), 776–778.

Koohang, A., & Durante, A. (2003). Learners' perceptions towards the web-based distance learning activities/assignments portion of an undergraduate hybrid instructional model. Journal of Information Technology Education, 2, 105–113.

Landauer, T. K., Foltz, P. W., & Laham, D. (1998). Introduction to latent semantic analysis. Discourse Processes, 25, 259–284.

Linz, P. (2006). An introduction to formal languages and automata. Narosa, 4th Edition, ISBN: 8173194602.

Pulman, S. G., & Sukkarieh, J. Z. (2005). Automatic short answer marking. Oxford University, Oxford, Workshop on Building Educational Applications Using NLP.

Ramesh, S., Manjit Sidhu, S., & Watugala, G. K. (2005). Exploring the potential of multiple choice questions in computer based assessment of student learning. Malaysian Online Journal of Instructional Technology, 2(1), 1–15.

Shen, L., Wang, M., & Shen, R. (2009). Affective e-learning: using "emotional" data to improve learning in pervasive learning environment. Educational Technology & Society, 12(2), 176–189.

Sukkarieh, J. Z., Pulman, S. G., & Raikes, N. (2003). Auto-marking 2: An update on the UCLES-Oxford University research into using computational linguistics to score short, free text responses. http://www.cs.ox.ac.uk/files/237/AUTOMARKING2.htm.

Tufte, E. R. (1989). Visual design of the user interface. Armonk: IBM Corporation.

Wang, P. P., Ruan, D., & Kerre, E. E. (2007). Fuzzy logic: A spectrum of theoretical and practical issues. New York: Springer. ISBN 9783540712572.

Wen, M. Z., & Min, W. (2006). Fuzzy automata induction using construction method. Journal of Mathematics and Statistics, 2(2), 395–400.