| { |
| "paper_id": "Y18-1047", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:36:25.149302Z" |
| }, |
| "title": "Minimalist Parsing of Heavy NP Shift", |
| "authors": [ |
| { |
| "first": "Lei", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Stony Brook University", |
| "location": {} |
| }, |
| "email": "lei.liu.1@stonybrook.edu" |
| } |
| ], |
| "year": "2018", |
| "venue": "32nd Pacific Asia Conference on Language, Information and Computation", |
| "identifiers": {}, |
| "abstract": "This paper studies Heavy NP Shift (HNPS) from the perspective of parsing using Minimalist Grammar. Based on memory usage of the MG parsers, processing difficulties of HNPS as derived by rightward movement, PP movement and remnant movement are each compared with a non-movement structure. A set of complexity metrics shows that shifted structures are indeed easier to parse than a non-movement structure when the NP is long.", |
| "pdf_parse": { |
| "paper_id": "Y18-1047", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper studies Heavy NP Shift (HNPS) from the perspective of parsing using Minimalist Grammar. Based on memory usage of the MG parsers, processing difficulties of HNPS as derived by rightward movement, PP movement and remnant movement are each compared with a non-movement structure. A set of complexity metrics shows that shifted structures are indeed easier to parse than a non-movement structure when the NP is long.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Heavy NP Shift (HNPS) refers to the tendency for long or phonologically \"heavy\" phrases to be shifted to positions other than those where they canonically occur. An English HNPS sentence is shown in (1a). The canonically word-ordered, or \"unshifted\", version of (1a) is the sentence in (1b). When the object NP is short, however, the shifted word order is marked, as shown in (1c).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Popular analyses of HNPS include: rightward movement of NP (Ross 1986) , where the heavy NP moves to the right edge of the constituent; the PP movement analysis (Kayne 1994) , where the PP moves leftward; and the remnant movement analysis (Rochemont and Culicover 1997) , where the heavy NP moves first, followed by movement of the \"remnant\" VP. The above analyses are schematized in (2-4) respectively. These syntactic analyses, distinct as they are, are equally successful in deriving the English HNPS word order. However, since the structural properties of a sentence predict how hard it is for humans to process it, it is unclear what processing predictions these analyses make, nor is it clear whether these predictions are borne out in observed human processing preferences.", |
| "cite_spans": [ |
| { |
| "start": 59, |
| "end": 70, |
| "text": "(Ross 1986)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 161, |
| "end": 173, |
| "text": "(Kayne 1994)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 239, |
| "end": 269, |
| "text": "(Rochemont and Culicover 1997)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Psycholinguistic studies on human sentence processing have shown that sentences with HNPS word order are preferred in production over the canonical word order when the NP is long (Stallings et al. 1998) . Additionally, it has been observed that the likelihood of shifting heavy NPs relates not only to the length of NPs, but of PPs as well. As the length of a PP increases, i.e., as the length difference between the NP and the VP decreases, HNPS is less likely to happen (Stallings and MacDonald 2011) . It is then interesting to explore whether and how well these psycholinguistic findings are predicted by a given structural analysis.", |
| "cite_spans": [ |
| { |
| "start": 179, |
| "end": 202, |
| "text": "(Stallings et al. 1998)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 472, |
| "end": 502, |
| "text": "(Stallings and MacDonald 2011)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Minimalist Grammar (MG) parsing (Stabler 2013 , Graf et al. 2017 ) provides a quantitative way to answer precisely these questions. As will hopefully become clear, it is possible to infer and compare the processing difficulties associated with syntactic structures by observing the parser's behavior when conjecturing those structures. This enables us to see whether the reported human processing findings are expected when a certain syntactic structure is assumed.", |
| "cite_spans": [ |
| { |
| "start": 32, |
| "end": 45, |
| "text": "(Stabler 2013", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 46, |
| "end": 64, |
| "text": ", Graf et al. 2017", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, I investigate the processing predictions that the three aforementioned structural proposals make from the perspective of Minimalist parsing. I will show that the parser's behavior suggests that the rightward movement analysis correctly predicts processing biases based on memory usage. The PP movement and remnant movement analyses make correct predictions when unpronounced nodes are ignored in the memory usage calculation. Moreover, when contrasted with previous studies on complexity metrics (Graf et al. 2015 , Zhang 2017 , Graf et al. 2017 ), the same set of metrics that makes correct processing predictions for relative clauses works for HNPS structures as well.", |
| "cite_spans": [ |
| { |
| "start": 496, |
| "end": 513, |
| "text": "(Graf et al. 2015", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 514, |
| "end": 526, |
| "text": ", Zhang 2017", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 527, |
| "end": 545, |
| "text": ", Graf et al. 2017", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This paper is structured as follows. Section 2 introduces Minimalist Grammars, MG parsers, and complexity metrics. Section 3 discusses how the comparisons are set up and what the results are. Section 4 concludes the paper with discussion.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The evaluation and comparison of how difficult given syntactic structures are to process are based on complexity metrics that measure the behavior of MG parsers. In this section, I first discuss Minimalist Grammars and MG parsers. Complexity metrics are then introduced with examples.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Minimalist Grammars, Parsers and Complexity Metrics", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Minimalist Grammars are grammar formalisms based on the Minimalist Program (Chomsky 2014) . The formalism is mathematically defined in Stabler (1996) , Graf (2012) . Intuitively, MG rules are expressed in lexical items, which are essentially feature bundles containing information such as pronunciation, category, and movement. Similar to a standard Minimalist Program-style derivation, these lexical items are built into sentences (trees) via merge, which combines lexical items and/or phrases, and move, which regulates movement. For a concrete example, lexical items for the English sentence Max packed boxes are listed in (5), while an MG derivation tree (modulo features on nodes) is shown in (6), next to a standard syntactic tree in (7).", |
| "cite_spans": [ |
| { |
| "start": 75, |
| "end": 89, |
| "text": "(Chomsky 2014)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 135, |
| "end": 149, |
| "text": "Stabler (1996)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 152, |
| "end": 163, |
| "text": "Graf (2012)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MG and Its Parser", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "(5) Max :: D \u2212 cat. nom \u2212 mvmt; packed :: D + sel. V \u2212 cat; boxes :: D \u2212 cat; C :: T + sel. C \u2212 cat; T :: v + sel. nom + mvmt. T \u2212 cat; v :: V + sel. D + sel. v \u2212 cat. (6) Derivation tree: [CP(merge) C [TP(move) [TP(merge) T [vP(merge) Max [v'(merge) v [VP(merge) packed boxes]]]]]]. (7) Syntactic tree: [CP C [TP Max [T' T [vP Max [v' v [VP packed boxes]]]]]].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MG and Its Parser", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "In MG, a derivation tree such as (6) is built from lexical items such as those in (5). Take the bottom-most VP in (6) for instance. The verb packed has a sel(ection) feature marked D + . This means it must merge with a feature-matching item, in this case boxes, which is marked D \u2212 . The product of this merge is a VP, which is feature-marked V \u2212 , the feature remaining on the head packed after D + and D \u2212 \"check\" each other off.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MG and Its Parser", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "An MG parser essentially does merge and move in reverse. It takes a set of MG rules and a sentence, and conjectures derivation trees in a recursive-descent fashion. Again, take the sentence Max packed boxes for example. An annotated derivation tree outlining the parser's behavior is shown in (8). The numbers on the two corners of each node in (8) indicate the steps at which the node is conjectured (superscripted numbers, or indices) and confirmed (subscripted numbers, or outdices) by the parser. In the example, the parser starts building the derivation tree from a CP (steps 1-2); \"un-\"merges the CP into C and a TP with an EPP movement landing site (steps 2-5); and then \"un-\"merges the TP into T and vP (steps 5-8). The box around the outdex 8 indicates that the T node has been kept in memory for a non-trivial number of steps (> 2). According to the feature specification, the T node can only be confirmed after all its features are checked. In this case, the movement licensor feature nom + is not checked until the parser confirms the mover, Max, at step 7.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MG and Its Parser", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Using the indices and outdices on a derivation tree, it is possible to infer the memory usage of the parser when conjecturing that tree. Specifically, following Graf et al. (2017) , one can measure how long a node is kept in memory, or Tenure; how many nodes are kept in memory, or Payload; and how long movement dependencies stretch, or Size. Take the same annotated tree in (8) for example. The node T has a Tenure of 3 (= 8 \u2212 5), which means it was kept in memory for 3 steps, as briefly mentioned before.", |
| "cite_spans": [ |
| { |
| "start": 157, |
| "end": 175, |
| "text": "Graf et al. (2017)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Complexity Metrics", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The whole tree has a Payload of 2, which is the number of non-trivial nodes, namely T and v , as indicated by the boxes on the outdices. The only movement in the tree has a Size of 2, which is calculated by subtracting the step at which the landing site is confirmed (4 in this case) from the step at which the mover is conjectured (6 in this case). From these notions, one can build a set of metrics that quantify how much more memory-intensive a structure P is than Q (i.e., how much harder it should be to parse). Each metric predicts that a structure P is harder than Q if certain conditions are met. For example, MaxT predicts that P is harder to parse than Q when the maximum tenure among all tenured nodes in P is greater than that in Q; SumS predicts that P is harder to parse than Q when the sum of the lengths of all the movements in P is greater than that in Q; and MaxS R makes the same prediction about P and Q when the longest movement dependency in P is greater than that in Q, while any ties between the two structures are ignored. The total number of base metrics used for this study is 20, among which the three in the examples above are arguably the most reliable, as we will see in the results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Complexity Metrics", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "3 Comparisons and Results", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Complexity Metrics", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Now that the metrics are defined, we can measure how difficult a proposed HNPS structure is to parse compared to a canonical structure, given the memory usage of the MG parser when conjecturing these structures. The comparisons are set up by specifying three parameters: a) target sentences, b) human processing biases of those sentences found in experiments, and c) syntactic structures of the target sentences. The target sentences are constructed by controlling the length of the DPs and PPs of the sentences (long and short, 2 \u00d7 2 = 4) and the word order (DP before PP and PP before DP). A total of four pairs of sentences are used in the comparisons. (9-12) show the DP-before-PP examples of the pairs. Human processing difficulties for sentences of the above four types have been reported in the experimental and theoretical literature. The HNPS word order is preferred over the canonical word order with long NPs and short PPs (ldp_spp) (Stallings et al. 1998) . As the length of PPs increases (ldp_lpp), the HNPS word order is no longer preferred (Stallings and MacDonald 2011) . Additionally, for the sentences with short DPs, the shifted word order is ungrammatical (Ross 1986) . Syntactic proposals deriving each sentence with shifted word order were rightward movement, PP movement and remnant movement, as discussed earlier. They are compared in a pairwise fashion with no movement, which derives the canonical word order. A total of 12 comparisons are conducted (4 phrase-length conditions \u00d7 3 syntactic proposals = 12). Each comparison asks whether the metrics can predict the reported processing difficulties across sentence types given a certain analysis. For instance, if rightward movement is the analysis in question, the comparison is set up such that, for the ldp_spp condition, i.e., the HNPS configuration, the rightward movement structure should be easier to parse than the no-movement structure, while for the remaining three conditions, the no-movement structure should be easier to parse. We then test how successful the metrics are in predicting these processing biases. A collection of Python scripts is used for the comparisons; it takes as input pairs of syntactic structures, the processing biases of these pairs, and a set of complexity metrics, and outputs whether each metric is successful, unsuccessful, or neutral in predicting those biases.", |
| "cite_spans": [ |
| { |
| "start": 934, |
| "end": 957, |
| "text": "(Stallings et al. 1998)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 1045, |
| "end": 1075, |
| "text": "(Stallings and MacDonald 2011)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 1166, |
| "end": 1177, |
| "text": "(Ross 1986)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Setting Up Comparisons", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Results of the comparisons show that, first, the MG parser's behavior predicts that an HNPS sentence is less difficult to parse than its canonically ordered counterpart, as expected. Recall that among the four length conditions, a shifted structure is predicted to be easier in its pair only for the ldp_spp condition. 8 out of 10 tenure-based metrics were able to predict this processing bias for the rightward movement analysis. The performance of each of the 20 metrics in the twelve conditions can be found in Appendix A.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Relevant annotated derivation trees confirm these results. For the heavy NP condition (ldp_spp), if the heavy NP does not move, the parser has to fully build the heavy NP before it can go back to the earlier branch to continue work on the PP. This causes a greater tenure on the V' node, as shown in (14) . In contrast, rightward movement essentially delays the heavy lifting of building the NP. Since the size of the PP, or of the right branch, is much smaller than that of its left branch, the tenure on the left-branching node is smaller than that on the right branch of a canonical structure, as shown in (15). It is also not difficult to see from the above that as the right sibling of the heavy NP, or in fact the lower PP, grows in length, the shifted order would no longer be preferred by the same complexity metrics. Comparison results from the ldp_lpp condition show exactly this, as demonstrated by a rightward movement case in (16). Recall that for a rightward movement structure, the parser builds the right branch before it returns to the left branch, the DP that has moved rightward. When the PP is also long, the tenure on the left-branching DP node increases as a result. And in this particular case, the shifted structure (16) is no longer preferred in terms of memory usage when compared to the canonical structure in (14), because of the greater tenure on the DP node.", |
| "cite_spans": [ |
| { |
| "start": 311, |
| "end": 315, |
| "text": "(14)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Second, the results suggest that the PP movement and remnant movement analyses also predict the processing advantage of HNPS when unpronounced nodes are ignored. 7 out of 10 and 8 out of 10 tenure-based filtered metrics were successful in predicting processing biases for the PP movement and remnant movement analyses, respectively. A performance summary of the metrics can be found in Appendix A. Graf et al. (2017) note that excluding unpronounced nodes from the memory usage calculation can improve the performance of tenure-based metrics. In our case, as can be seen in (17) and (18), the nodes with large tenure are unpronounced Vs and vs. Once these nodes are excluded from the memory usage calculation, HNPS is predicted to be easier to process under the PP movement and remnant movement analyses, as indicated by the relatively small tenures on the shaded tenured nodes. Moreover, ranked complexity metrics that are successful in predicting processing biases for other syntactic structures also make correct predictions for HNPS when a rightward movement structure is assumed. Ranked metrics are metrics of the form < M 1, M 2 >, which compare structures according to metric M1; when M1 results in a tie, M2 is used. Graf et al. (2017) , Zhang (2017) note that the ranked metrics < MaxT, SumS > and < MaxT, MaxS R > were able to make correct processing predictions for relative clauses across several languages. These two metrics were also successful in predicting sentence processing biases across conditions in the current study when assuming the rightward movement analysis.", |
| "cite_spans": [ |
| { |
| "start": 393, |
| "end": 411, |
| "text": "Graf et al. (2017)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 1208, |
| "end": 1226, |
| "text": "Graf et al. (2017)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 1229, |
| "end": 1241, |
| "text": "Zhang (2017)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The results of the current study first provide evidence for a memory usage-based view of HNPS as discussed in the incremental language production model (Stallings and MacDonald 2011) . On the one hand, memory usage by the parser reliably predicts the processing advantage of HNPS structures. On the other hand, the relation between DP-PP length conditions and their processing difficulties follows directly from the syntactic structures that the MG parser is building.", |
| "cite_spans": [ |
| { |
| "start": 148, |
| "end": 178, |
| "text": "(Stallings and MacDonald 2011)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and conclusion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Furthermore, the MG parsing model provides a fresh perspective on the three competing analyses. Given the processing predictions, the complexity metrics favor the rightward movement analysis over the rest. This is because, from the parser's perspective, assuming rightward movement delays building a large phrase, which decreases the tenure on its sister node. Assuming either of the other two analyses does not have this effect.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and conclusion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "To conclude, this paper studies HNPS from an MG parsing perspective. Memory usage-based metrics suggest that HNPS structures are easier to parse. For correct processing predictions, rightward movement is favored by the parser, while the PP movement and remnant movement analyses require filtering out unpronounced nodes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and conclusion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "32nd Pacific Asia Conference on Language, Information and Computation, Hong Kong, 1-3 December 2018. Copyright 2018 by the author.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "The minimalist program", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Chomsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chomsky, N. (2014). The minimalist program. MIT press.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Locality and the complexity of minimalist derivation tree languages", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Graf", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Formal Grammar", |
| "volume": "", |
| "issue": "", |
| "pages": "208--227", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Graf, T. (2012). Locality and the complexity of minimalist derivation tree languages. In Formal Grammar, pages 208-227. Springer.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "A refined notion of memory usage for minimalist parsing", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Graf", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Fodor", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Monette", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Rachiele", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Warren", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 14th Meeting on the Mathematics of Language", |
| "volume": "", |
| "issue": "", |
| "pages": "1--14", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Graf, T., Fodor, B., Monette, J., Rachiele, G., War- ren, A., and Zhang, C. (2015). A refined notion of memory usage for minimalist parsing. In Pro- ceedings of the 14th Meeting on the Mathematics of Language (MoL 2015), pages 1-14.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Relative clauses as a benchmark for Minimalist parsing", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Graf", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Monette", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Journal of Language Modelling", |
| "volume": "5", |
| "issue": "", |
| "pages": "57--106", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Graf, T., Monette, J., and Zhang, C. (2017). Rela- tive clauses as a benchmark for Minimalist pars- ing. Journal of Language Modelling, 5:57-106.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "The antisymmetry of syntax. Number 25", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [ |
| "S" |
| ], |
| "last": "Kayne", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kayne, R. S. (1994). The antisymmetry of syntax. Number 25. mit Press.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "On shell structure", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [ |
| "K" |
| ], |
| "last": "Larson", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Larson, R. K. (2014). On shell structure. Routledge.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Deriving dependent right adjuncts in english. Rightward movement", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Rochemont", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [ |
| "W" |
| ], |
| "last": "Culicover", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "279--300", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rochemont, M. and Culicover, P. W. (1997). De- riving dependent right adjuncts in english. Right- ward movement, pages 279-300.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Infinite syntax", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "R" |
| ], |
| "last": "Ross", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ross, J. R. (1986). Infinite syntax. Ablex Publishing Corporation.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Derivational minimalism", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Stabler", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Logical Aspects of Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "68--95", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stabler, E. (1996). Derivational minimalism. In International Conference on Logical Aspects of Computational Linguistics, pages 68-95. Springer.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Two models of minimalist, incremental syntactic analysis", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [ |
| "P" |
| ], |
| "last": "Stabler", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Topics in cognitive science", |
| "volume": "5", |
| "issue": "3", |
| "pages": "611--633", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stabler, E. P. (2013). Two models of minimalist, incremental syntactic analysis. Topics in cognitive science, 5(3):611-633.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "It's not just the \"heavy np\": relative phrase length modulates the production of heavy-np shift", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [ |
| "M" |
| ], |
| "last": "Stallings", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "C" |
| ], |
| "last": "Macdonald", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Journal of psycholinguistic research", |
| "volume": "40", |
| "issue": "3", |
| "pages": "177--187", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stallings, L. M. and MacDonald, M. C. (2011). It's not just the \"heavy np\": relative phrase length modulates the production of heavy-np shift. Jour- nal of psycholinguistic research, 40(3):177-187.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Phrasal ordering constraints in sentence production: Phrase length and verb disposition in heavy-np shift", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [ |
| "M" |
| ], |
| "last": "Stallings", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "C" |
| ], |
| "last": "Macdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [ |
| "G" |
| ], |
| "last": "Seaghdha", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Journal of Memory and Language", |
| "volume": "39", |
| "issue": "3", |
| "pages": "392--417", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stallings, L. M., MacDonald, M. C., and O'Seaghdha, P. G. (1998). Phrasal ordering constraints in sentence production: Phrase length and verb disposition in heavy-np shift. Journal of Memory and Language, 39(3):392-417.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Stacked Relatives: Their Structure, Processing and Computation", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhang, C. (2017). Stacked Relatives: Their Struc- ture, Processing and Computation. PhD thesis, State University of New York at Stony Brook.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "(1) a. Max put [ PP in his car] [ NP all the boxes of home furnishings]. (Larson 2014) b. Max put [ NP all the boxes of home furnishings] [ PP in his car]. c. ??Max put [ PP in his car] [ NP boxes].", |
| "uris": null, |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "text": "put t in his car <put> all ... furnishings. ... furnishings] <put> in his car", |
| "uris": null, |
| "num": null, |
| "type_str": "figure" |
| } |
| } |
| } |
| } |