Re-modeling incremental and holistic processing in multi-word comprehension

Aleksander Główka
Department of Linguistics, Stanford University
[email protected]

Incremental processing vs. holistic processing

Incremental processing (IP):
• Comprehenders weigh the target multi-word string (MWS) against all representations compatible with accruing evidence from unfolding input (e.g. Levy 2008);
• MWS frequency is not meaningful on its own, only when weighted against partial input of the MWS.

Holistic processing (HP):
• Comprehenders process the MWS independently of representations compatible with partial input (Arnon & Snider 2010 et seq., Christiansen & Chater 2016);
• MWS frequency is meaningful on its own, above & beyond MWS-internal distributional information.

Research question: can incremental & holistic processing both non-redundantly constrain MWS comprehension?

Towards a complete characterization of IP: syntagmatic & paradigmatic competition
• Various incremental measures have been proposed (e.g. conditional probability, surprisal, logit, informativity, mutual information).
• The lack of a unified characterization of IP precludes direct comparison of IP & HP.
• I propose a two-dimensional competitive characterization of IP.
• Along the syntagmatic axis, the target competes with its subchunks.
• Along the paradigmatic axis, it competes with partially overlapping MWS (a toy formulation of both axes is sketched below).
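As a concrete illustration of the two axes, the toy sketch below (my own formulation with invented counts, not the poster's exact index definitions) scores a four-gram against its attested subchunks on the syntagmatic axis, and against attested four-grams sharing its three-word prefix on the paradigmatic axis:

# Illustrative sketch of the two competition axes (toy counts; the exact
# index definitions are the poster's own and may differ).
import math

# Hypothetical corpus frequencies for a target four-gram and its neighbors.
counts = {
    "do n't have to": 1200,     # target MWS
    "do n't": 45000,            # subchunks (syntagmatic competitors)
    "do n't have": 5200,
    "have to": 60000,
    "do n't have a": 900,       # prefix-sharing MWS (paradigmatic competitors)
    "do n't have any": 650,
}

def subchunks(mws):
    # All contiguous proper sub-strings of length >= 2 words.
    words = mws.split()
    return [" ".join(words[i:j])
            for i in range(len(words))
            for j in range(i + 2, len(words) + 1)
            if j - i < len(words)]

def syntagmatic_competition(target, counts):
    # Dense: every attested subchunk contributes, weighted by its frequency
    # relative to the whole (one possible aggregation, chosen for illustration).
    f_target = counts[target]
    return sum(math.log(counts[c] / f_target)
               for c in subchunks(target) if c in counts)

def paradigmatic_competition(target, counts):
    # Dense: every attested four-gram sharing the target's 3-word prefix competes.
    prefix = " ".join(target.split()[:3])
    rivals = [c for c in counts
              if c != target and len(c.split()) == 4 and c.startswith(prefix)]
    f_target = counts[target]
    return sum(math.log(counts[c] / f_target) for c in rivals) if rivals else 0.0

target = "do n't have to"
print(syntagmatic_competition(target, counts))   # larger => stronger subchunk rivals
print(paradigmatic_competition(target, counts))  # larger => stronger frame rivals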

Building hypothesis-driven conjunctive models using multi-model inference & bagging

      Psycholinguistic account              Critical predictors
I.    Radical holistic                      Frequency
II.   Radical dense competition             Paradigmatic Competition Index + Syntagmatic Competition Index
III.  Radical sparse competition            Negative Conditional Probability + Negative Mutual Information
IV.   Radical heterogeneous competition A   Negative Conditional Probability + Syntagmatic Competition Index
V.    Radical heterogeneous competition B   Paradigmatic Competition Index + Negative Mutual Information
VI.   Dense dual stream                     Frequency + Paradigmatic Competition Index + Syntagmatic Competition Index
VII.  Sparse dual stream                    Frequency + Negative Conditional Probability + Negative Mutual Information
VIII. Heterogeneous dual stream A           Frequency + Negative Conditional Probability + Syntagmatic Competition Index
IX.   Heterogeneous dual stream B           Frequency + Paradigmatic Competition Index + Negative Mutual Information
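For concreteness, the nine candidate models above can be encoded as regression formulas over the four critical predictors. A minimal sketch follows, assuming shorthand predictor names (freq, pci, sci, ncp, nmi) and a reaction-time dependent variable; the poster does not show its exact model specifications, control covariates, or random-effect structure.

# The nine hypothesis-driven conjunctive models as predictor sets
# (a schematic encoding; actual fits would add controls and random effects).
CRITICAL_PREDICTORS = {
    "I. Radical holistic":               ["freq"],
    "II. Radical dense competition":     ["pci", "sci"],
    "III. Radical sparse competition":   ["ncp", "nmi"],
    "IV. Radical heterogeneous A":       ["ncp", "sci"],
    "V. Radical heterogeneous B":        ["pci", "nmi"],
    "VI. Dense dual stream":             ["freq", "pci", "sci"],
    "VII. Sparse dual stream":           ["freq", "ncp", "nmi"],
    "VIII. Heterogeneous dual stream A": ["freq", "ncp", "sci"],
    "IX. Heterogeneous dual stream B":   ["freq", "pci", "nmi"],
}

def formula(predictors, dv="rt"):
    # Build a patsy-style formula string for one candidate model.
    return f"{dv} ~ " + " + ".join(predictors)

for name, preds in CRITICAL_PREDICTORS.items():
    print(f"{name}: {formula(preds)}")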

Model rankings by bagged AIC (B=1000)*

[Figure: bagged-AIC model rankings, one panel per dataset: Arnon & Snider (2010) and Tremblay & Tucker (2011).]

*Whiskers indicate 95% confidence intervals. Note that confidence intervals are used here for visualization purposes only; hypothesis-testing tools from frequentist statistics are not interpretable in multi-model inference.
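Procedurally, bagged AIC combines the nonparametric bootstrap (Efron & Tibshirani 1993) with AIC-based multi-model inference (Burnham & Anderson 2002): resample the data with replacement B = 1000 times, refit every candidate model on each resample, collect its AIC, and rank the candidates by their bootstrap AIC distributions. A minimal sketch under those assumptions, using ordinary least squares via statsmodels on hypothetical column names; the poster's actual fitting procedure is not shown:

# Minimal bagged-AIC sketch: bootstrap the data, refit each candidate
# model, collect AICs, then rank models by mean bootstrap AIC.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def bagged_aic(data: pd.DataFrame, formulas: dict, B: int = 1000, seed: int = 1):
    rng = np.random.default_rng(seed)
    n = len(data)
    aics = {name: np.empty(B) for name in formulas}
    for b in range(B):
        boot = data.iloc[rng.integers(0, n, size=n)]  # resample rows w/ replacement
        for name, f in formulas.items():
            aics[name][b] = smf.ols(f, data=boot).fit().aic
    # Rank candidates by mean bootstrap AIC; percentile intervals give
    # the (visualization-only) whiskers.
    summary = {name: (vals.mean(), np.percentile(vals, [2.5, 97.5]))
               for name, vals in aics.items()}
    return sorted(summary.items(), key=lambda kv: kv[1][0])

# Usage (hypothetical column names):
# formulas = {"I. Radical holistic": "rt ~ freq",
#             "VII. Sparse dual stream": "rt ~ freq + ncp + nmi"}
# ranking = bagged_aic(df, formulas, B=1000)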

Generalizing processing measures: dense & sparse competition
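The poster's figure for this section is not reproduced here. One schematic way to make the generalization concrete (my own hedged reconstruction, which may differ from the original definitions): each stream scores the target w against a competitor set \(\mathcal{C}(w)\),

\[
\mathrm{Comp}(w) \;=\; \sum_{c \,\in\, \mathcal{C}(w)} \log \frac{P(c)}{P(w)},
\]

dense when every member of \(\mathcal{C}(w)\) contributes individually, sparse when the set is collapsed into a single aggregate term. Under one common n-gram formulation (an assumption here), the sparse paradigmatic case, with \(\mathcal{C}(w)\) the strings sharing the target's three-word frame, reduces to negative conditional probability,

\[
\mathrm{NCP}(w_{1\ldots 4}) \;=\; -\,P(w_4 \mid w_1 w_2 w_3),
\]

and the sparse syntagmatic case, with \(\mathcal{C}(w)\) the target's subchunks, reduces to negative (pointwise) mutual information between the whole and its parts,

\[
\mathrm{NMI}(w_{1\ldots 4}) \;=\; -\log \frac{P(w_1 w_2 w_3 w_4)}{P(w_1 w_2 w_3)\,P(w_4)}.
\]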

Take-away points
• Multi-word comprehension non-redundantly integrates offline holistic exemplars and incremental hypotheses weighted online.
• Findings support multilevel exemplar models of sentence comprehension: chunking yields exemplar clouds of different complexity (Walsh et al. 2010).
• Unified account of incremental processing: established incremental processing measures, mutual information & conditional probability, can be treated as special (i.e. sparse) cases of the more general processing streams, syntagmatic & paradigmatic competition respectively.
• Incremental competition density: sparse competition measures are on average preferred to their dense counterparts; this may be due to the behavioral tasks used (i.e. the MWS is presented all at once rather than incrementally).

Re-modeling two existing MWS datasets

Arnon & Snider (2010)
• Stimuli: 90 four-grams from Switchboard & Fisher
• Conditions: 4 non-discrete frequency bins (stimuli continuous across the frequency spectrum)
• Task: phrasal decision (phrases vs. word scrambles)
• Participants: 49 native American English speakers

Tremblay & Tucker (2011)
• Stimuli: 432 four-grams from the BNC (112 most frequent + 320 randomly selected)
• Stimuli continuous across the frequency spectrum
• Task: unprepared production task (comprehension approximated by RTs)
• Participants: 17 native American English speakers

Acknowledgements & selected references

Acknowledgements: I thank Inbal Arnon, Neal Snider, and Antoine Tremblay for sharing their data with me and answering related questions. I thank Joan Bresnan, Meghan Sumner, and Tom Wasow for their sustained advice on this project. All errors are mine.

Selected references:
Arnon, I. & Snider, N. (2010). More than words: Frequency effects for multi-word phrases. Journal of Memory and Language 62 (1), 67-82.
Burnham, K. & Anderson, D. (2002). Model Selection and Multimodel Inference. New York: Springer.
Christiansen, M. H. & Chater, N. (2016). The Now-or-Never bottleneck: A fundamental constraint on language. Behavioral and Brain Sciences 39, 1-72.
Efron, B. & Tibshirani, R. (1993). An Introduction to the Bootstrap. Boca Raton, FL: Chapman & Hall/CRC.
Kazerounian, S. & Grossberg, S. (2014). Real-time learning of predictive recognition categories that chunk sequences of items stored in working memory. Frontiers in Psychology 5, 1-28.
Levy, R. (2008). Expectation-based syntactic comprehension. Cognition 106 (3), 1126-1177.
Tremblay, A. & Tucker, B. (2011). The effects of N-gram probabilistic measures on the recognition and production of four-word sequences. The Mental Lexicon 6 (2), 302-324.
Walsh, M., Möbius, B., Wade, T. & Schütze, H. (2010). Multilevel exemplar theory. Cognitive Science 34, 537-582.