{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:38:45.428862Z"
},
"title": "Crosslinguistic Word Orders Enable an Efficient Tradeoff of Memory and Surprisal (Abstract)",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Hahn",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {}
},
"email": "mhahn2@stanford.edu"
},
{
"first": "Richard",
"middle": [],
"last": "Futrell",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California",
"location": {
"settlement": "Irvine"
}
},
"email": "rfutrell@uci.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "Memory limitations are well-established as a factor in human online sentence processing (Gibson, 1998; Lewis and Vasishth, 2005), and have been argued to account for crosslinguistic word order regularities. For example, the Performance-Grammar Correspondence Hypothesis of Hawkins (1994) holds that forms which are easier to produce and comprehend end up becoming part of the grammars of languages. We build on expectation-based models of language processing (Levy, 2008) and on the theory of lossy compression (Cover and Thomas, 2006) to develop a highly general information-theoretic notion of memory efficiency in language processing, in terms of a tradeoff of surprisal and memory usage. We derive a method for estimating a lower bound on the memory efficiency of languages from corpora, and apply our method to corpora from 54 languages to test the idea that word order is structured to reduce processing effort under memory limitations. We find that word orders tend to support efficient tradeoffs between memory and surprisal, suggesting that word order rules are structured to enable efficient online processing.",
"cite_spans": [
{
"start": 88,
"end": 101,
"text": "(Gibson, 1998",
"ref_id": "BIBREF2"
},
{
"start": 104,
"end": 129,
"text": "Lewis and Vasishth, 2005)",
"ref_id": "BIBREF6"
},
{
"start": 275,
"end": 289,
"text": "Hawkins (1994)",
"ref_id": "BIBREF4"
},
{
"start": 460,
"end": 472,
"text": "(Levy, 2008)",
"ref_id": "BIBREF5"
},
{
"start": 523,
"end": 536,
"text": "Thomas, 2006)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Background Surprisal theory (Levy, 2008) posits that the processing effort on a word w_t in context w_1 ... w_{t-1} is proportional to the surprisal of the word in context:",
"cite_spans": [
{
"start": 28,
"end": 40,
"text": "(Levy, 2008)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "S := -log P(w_t | w_1 ... w_{t-1}).",
"eq_num": "(1)"
}
],
"section": "",
"sec_num": null
},
{
"text": "Experimental work has confirmed that surprisal is a reliable and linear predictor of processing effort as reflected in reading times (Smith and Levy, 2013). However, surprisal theory as presented above cannot in principle account for effects of memory limitations on online processing, because Equation 1 represents surprisal as experienced by an idealized listener who accurately remembers the entire history of previous words w_{1...t-1}. [Figure 1: Conceptual tradeoff between memory and surprisal for two languages. In Language A (blue), a listener storing 1 bit can achieve average surprisal 3.5, while the same level of surprisal requires 2 bits of memory for a listener in Language B (red).] More",
"cite_spans": [
{
"start": 133,
"end": 155,
"text": "(Smith and Levy, 2013)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 446,
"end": 454,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "realistically, human listeners deploy memory resources that maintain imperfect representations of the preceding context (Lewis and Vasishth, 2005; Futrell and Levy, 2017). If m_t is a listener's memory state after hearing w_1 ... w_{t-1}, then the true surprisal experienced by the listener will be:",
"cite_spans": [
{
"start": 120,
"end": 146,
"text": "(Lewis and Vasishth, 2005;",
"ref_id": "BIBREF6"
},
{
"start": 147,
"end": 170,
"text": "Futrell and Levy, 2017)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "S_M := -log_2 P(w_t | m_t),",
"eq_num": "(2)"
}
],
"section": "",
"sec_num": null
},
{
"text": "which on average must be at least as large as Eq. 1 (Cover and Thomas, 2006).",
"cite_spans": [
{
"start": 54,
"end": 67,
"text": "Thomas, 2006)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Memory-surprisal tradeoff. These considerations imply a tradeoff between memory and surprisal: a listener maintaining higher-precision memory representations m_t will, on average, incur lower surprisal, at the cost of higher memory load. The idea of the memory-surprisal tradeoff is visualized in Fig. 1: for each desired level of average surprisal, there is a minimum number of bits of information which must be stored about context. The shape of the tradeoff is determined by the language, and in particular its word order: some languages enable more efficient tradeoffs than others by forcing a listener to store more bits in memory to achieve the same level of average surprisal. [Figure 2: Tradeoffs between memory (x axis) and surprisal (y axis) in 54 languages, for real orderings (blue) and counterfactual baseline grammars (red). We provide 95% confidence bands for different model runs on the real languages, and for the median across different baseline grammars.]",
"cite_spans": [],
"ref_spans": [
{
"start": 297,
"end": 303,
"text": "Fig. 1",
"ref_id": null
},
{
"start": 676,
"end": 684,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Theoretical Results In Theorem 1 below, we derive a bound on the memory-surprisal tradeoff curve which can be easily estimated from corpora. Let I_t be the conditional mutual information between words that are t steps apart, conditioned on the intervening words:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "I_t := I[w_t; w_0 | w_{1...t-1}].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This quantity measures how much predictive information the word t steps in the past contains about the current word. Theorem 1. Let T be a positive integer, and consider a listener using at most ∑_{t=1}^{T} t·I_t bits of memory on average. Then this listener will incur average surprisal at least H[w_t | w_{<t}] + ∑_{t>T} I_t. The theorem allows us to estimate the extra surprisal associated with each amount of memory capacity for a language. The quantities I_t can be estimated as the difference between the cross-entropies of language models that have access to the last t-1 or t words. Given such estimates of I_t, we estimate tradeoff curves as in Figure 1 by tracing out T = 1, 2, ....",
"cite_spans": [],
"ref_spans": [
{
"start": 646,
"end": 654,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We tested whether word orders as found in natural language grammars provide efficient memory-surprisal tradeoffs. To this end, we compared corpora of real languages against hypothetical reorderings of those languages under random baseline grammars. We used treebanks of 54 languages from Universal Dependencies 2.3 (Nivre et al., 2018).",
"cite_spans": [
{
"start": 315,
"end": 335,
"text": "(Nivre et al., 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": null
},
{
"text": "For each language, we constructed counterfactual word order rules by adapting the methodology of Gildea and Temperley (2010) to Universal Dependencies: for each syntactic relation (subject, object, ...) used in the treebank annotation, we randomly sampled its position relative to the head and other siblings. For each language and each such set of rules, we reordered the treebank according to these counterfactual word order rules.",
"cite_spans": [
{
"start": 97,
"end": 124,
"text": "Gildea and Temperley (2010)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": null
},
{
"text": "For each language and its counterfactually ordered versions, we estimated the memory-surprisal tradeoff (Theorem 1) using an LSTM recurrent neural language model, considering all integers T = 1, ..., 20. Hyperparameters were tuned, for each language, to minimize average cross-entropy on the counterfactual versions, introducing a conservative bias against our hypothesis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": null
},
{
"text": "Tradeoff curves are shown in Figure 2. In 50 out of 54 languages, the observed orderings led to more favorable tradeoffs than 50% of the counterfactual orderings (p < 0.0001; exceptions: Latvian, North Sami, Polish, and Slovak).",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 37,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": null
},
{
"text": "Taken together, our results suggest that, across languages, word order in part reflects pressures towards efficient online processing under memory limitations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Elements of Information Theory",
"authors": [
{
"first": "Thomas",
"middle": [
"M"
],
"last": "Cover",
"suffix": ""
},
{
"first": "J",
"middle": [
"A"
],
"last": "Thomas",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas M. Cover and J.A. Thomas. 2006. Elements of Information Theory. John Wiley & Sons, Hoboken, NJ.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Noisy-context surprisal as a human sentence processing cost model",
"authors": [
{
"first": "R",
"middle": [],
"last": "Futrell",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2017,
"venue": "EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R Futrell and R Levy. 2017. Noisy-context surprisal as a human sentence processing cost model. In EACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Linguistic complexity: locality of syntactic dependencies",
"authors": [
{
"first": "E",
"middle": [],
"last": "Gibson",
"suffix": ""
}
],
"year": 1998,
"venue": "Cognition",
"volume": "68",
"issue": "1",
"pages": "1--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Gibson. 1998. Linguistic complexity: locality of syntactic dependencies. Cognition, 68(1):1-76.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Do Grammars Minimize Dependency Length",
"authors": [
{
"first": "D",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Temperley",
"suffix": ""
}
],
"year": 2010,
"venue": "Cognitive Science",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D Gildea and D Temperley. 2010. Do Grammars Minimize Dependency Length? Cognitive Science.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A performance theory of order and constituency",
"authors": [
{
"first": "J",
"middle": [
"A"
],
"last": "Hawkins",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "JA Hawkins. 1994. A performance theory of order and constituency.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Expectation-based syntactic comprehension",
"authors": [
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2008,
"venue": "Cognition",
"volume": "106",
"issue": "3",
"pages": "1126--1177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roger Levy. 2008. Expectation-based syntactic comprehension. Cognition, 106(3):1126-1177.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An activation-based model of sentence processing as skilled memory retrieval",
"authors": [
{
"first": "R",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vasishth",
"suffix": ""
}
],
"year": 2005,
"venue": "Cognitive Science",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R Lewis and S Vasishth. 2005. An activation-based model of sentence processing as skilled memory retrieval. Cognitive Science.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The effect of word predictability on reading time is logarithmic",
"authors": [
{
"first": "Nathaniel",
"middle": [
"J"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2013,
"venue": "Cognition",
"volume": "128",
"issue": "3",
"pages": "302--319",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathaniel J. Smith and Roger Levy. 2013. The effect of word predictability on reading time is logarithmic. Cognition, 128(3):302-319.",
"links": null
}
},
"ref_entries": {}
}
}