STRUCTURED LANGUAGE MODELING FOR SPEECH RECOGNITION: ERRATA

Ciprian Chelba and Frederick Jelinek

Abstract

We present revised Wall Street Journal (WSJ) lattice rescoring experiments using the structured language model (SLM).
1 Experiments

We repeated the WSJ lattice rescoring experiments reported in [1] in a standard setup. We chose to work on the DARPA '93 evaluation HUB1 test set: 213 utterances, 3446 words. The 20kwds open vocabulary and the baseline 3-gram model are the standard ones provided by NIST. As a first step we evaluated the perplexity performance of the SLM relative to that of a deleted-interpolation 3-gram model trained under the same conditions: training data size of 20Mwds (a subset of the training data used for the baseline 3-gram model) and the standard HUB1 open vocabulary of size 20kwds; both the training data and the vocabulary were re-tokenized to conform to the UPenn Treebank tokenization. We linearly interpolated the SLM with the above 3-gram model:

P(·) = λ · P_3gram(·) + (1 − λ) · P_SLM(·)
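As an illustration, the interpolation and its held-out tuning can be sketched as follows; the per-word probabilities, the grid, and all function names here are hypothetical, not taken from the experiments in this note.

```python
import math

def interpolate(p_3gram, p_slm, lam):
    """Linearly interpolate two language-model probabilities:
    P(w | h) = lam * P_3gram(w | h) + (1 - lam) * P_SLM(w | h)."""
    return lam * p_3gram + (1.0 - lam) * p_slm

def perplexity(word_probs):
    """Perplexity of a text given the per-word model probabilities."""
    n = len(word_probs)
    return math.exp(-sum(math.log(p) for p in word_probs) / n)

def pick_lambda(p_3gram, p_slm, grid=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0)):
    """Choose the interpolation weight that minimizes held-out perplexity."""
    def ppl(lam):
        return perplexity([interpolate(a, b, lam)
                           for a, b in zip(p_3gram, p_slm)])
    return min(grid, key=ppl)

# Toy held-out probabilities (hypothetical): the two models are
# complementary, so an intermediate weight beats either model alone.
p3 = [0.20, 0.01, 0.30]
ps = [0.05, 0.25, 0.10]
best = pick_lambda(p3, ps)
```

On the real held-out data the analogous search gave the weight λ = 0.4 reported below.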
This interpolation yields a 10% relative reduction over the perplexity of the 3-gram model. The results are presented in Table 1. The SLM parameter reestimation procedure reduces the PPL by 5% (2% after interpolation with the 3-gram model); because the reestimation procedure is computationally expensive, we ran only a single iteration. The main reduction in PPL comes, however, from the interpolation with the 3-gram model, showing that although the two models overlap, they successfully complement each other. The interpolation weight was determined on a held-out set to be λ = 0.4. Both language models operate in the UPenn Treebank text tokenization. (This work was funded by the NSF STIMULATE grant IRI-19618874.)

A second batch of experiments evaluated the performance of the SLM for 3-gram lattice decoding; in the previous experiments reported on WSJ we had accidentally used bigram lattices. The lattices were generated using the standard baseline 3-gram language model
trained on 40Mwds and using the standard 20kwds open vocabulary. The best achievable WER on these lattices was measured to be 3.3%, leaving a large margin for improvement over the 13.7% baseline WER. For the lattice rescoring experiments we adjusted the operation of the SLM so that it assigns probability to word sequences in the CSR tokenization, making the interpolation between the SLM and the baseline 3-gram model valid. The results are presented in Table 2. The SLM achieved an absolute improvement in WER of 0.8% (5% relative) over the baseline, despite using half the amount of training data used by the baseline 3-gram model. Reestimating the SLM does not yield an improvement in WER when interpolating with the 3-gram model, although it improves the performance of the SLM by itself.

Lattice 3-gram (40Mwds) + SLM         λ = 0.0   λ = 0.4   λ = 1.0
WER, initial SLM (iteration 0)          14.5      12.9      13.7
WER, reestimated SLM (iteration 1)      14.2      13.2      13.7

Table 2: Test set word error rate (%) results.
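A schematic view of the rescoring step, using an n-best list as a simpler stand-in for the word lattices actually rescored above; all scores, names, and the LM-weight value are illustrative assumptions, not from the experiments.

```python
import math

def rescore_nbest(hypotheses, lm_logprob, lm_weight=2.0):
    """Rescore an n-best list: total score = acoustic log-score
    + lm_weight * LM log-probability; return the best word sequence.
    `hypotheses` is a list of (words, acoustic_logscore) pairs and
    `lm_logprob` maps a word sequence to its LM log-probability."""
    def total(hyp):
        words, acoustic = hyp
        return acoustic + lm_weight * lm_logprob(words)
    return max(hypotheses, key=total)[0]

# Hypothetical example: the language model overturns the acoustic
# front end's preference for the second hypothesis.
nbest = [(("the", "dog", "barked"), -10.0),
         (("the", "dog", "bark"), -9.5)]
lm = {("the", "dog", "barked"): math.log(0.02),
      ("the", "dog", "bark"): math.log(0.001)}
best = rescore_nbest(nbest, lambda w: lm[w], lm_weight=2.0)
```

With an interpolated LM, `lm_logprob` would return the log of the λ-weighted mixture of the 3-gram and SLM probabilities.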
References

[1] Ciprian Chelba and Frederick Jelinek. Structured language modeling for speech recognition. In Proceedings of NLDB '99, Klagenfurt, Austria, 1999.