{
"paper_id": "K16-1010",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:11:41.194056Z"
},
"title": "Incremental Prediction of Sentence-final Verbs: Humans versus Machines",
"authors": [
{
"first": "Alvin",
"middle": [
"C"
],
"last": "Grissom",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Colorado Boulder",
"location": {}
},
"email": "alvin.grissom@colorado.edu"
},
{
"first": "Naho",
"middle": [],
"last": "Orita",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tohoku University",
"location": {}
},
"email": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Colorado Boulder",
"location": {}
},
"email": "jordan.boyd.graber@colorado.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Verb prediction is important in human sentence processing and, practically, in simultaneous machine translation. In verb-final languages, speakers select the final verb before it is uttered, and listeners predict it before it is uttered. Simultaneous interpreters must do the same to translate in real-time. Motivated by the problem of SOV-SVO simultaneous machine translation, we provide a study of incremental verb prediction in verb-final languages. As a basis of comparison, we examine incremental verb prediction with human participants in a multiple choice setting using crowdsourcing to gain insight into incremental human performance in a constrained setting. We then examine a computational approach to incremental verb prediction using discriminative classification with shallow features. Both humans and machines predict verbs more accurately as more of a sentence becomes available, and case markers-when available-help humans and sometimes machines predict final verbs.",
"pdf_parse": {
"paper_id": "K16-1010",
"_pdf_hash": "",
"abstract": [
{
"text": "Verb prediction is important in human sentence processing and, practically, in simultaneous machine translation. In verb-final languages, speakers select the final verb before it is uttered, and listeners predict it before it is uttered. Simultaneous interpreters must do the same to translate in real-time. Motivated by the problem of SOV-SVO simultaneous machine translation, we provide a study of incremental verb prediction in verb-final languages. As a basis of comparison, we examine incremental verb prediction with human participants in a multiple choice setting using crowdsourcing to gain insight into incremental human performance in a constrained setting. We then examine a computational approach to incremental verb prediction using discriminative classification with shallow features. Both humans and machines predict verbs more accurately as more of a sentence becomes available, and case markers-when available-help humans and sometimes machines predict final verbs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Humans predict future linguistic input before it is observed (Kutas et al., 2011) . This predictability has been formalized in information theory (Shannon, 1948 )-the more predictable a word is, the lower the entropy-and has explained various linguistic phenomena, such as garden path ambiguity (Den and Inoue, 1997; Hale, 2001) . Such instances of linguistic prediction are fundamental to statistical NLP. Auto-complete from search engines has made next-word prediction one of best known NLP applications.",
"cite_spans": [
{
"start": 61,
"end": 81,
"text": "(Kutas et al., 2011)",
"ref_id": "BIBREF12"
},
{
"start": 146,
"end": 160,
"text": "(Shannon, 1948",
"ref_id": "BIBREF21"
},
{
"start": 295,
"end": 316,
"text": "(Den and Inoue, 1997;",
"ref_id": "BIBREF2"
},
{
"start": 317,
"end": 328,
"text": "Hale, 2001)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Importance of Verb Prediction",
"sec_num": "1"
},
{
"text": "Long-distance word prediction, such as verb prediction in SOV languages (Levy and Keller, 2013; Momma et al., 2015; Chow et al., 2015) , is important in simultaneous machine translation from subject-object-verb (SOV) languages to subjectverb-object (SVO) languages. In SVO languages such as English, for example, the main verb phrase usually comes after the first noun phrase-the main subject-in a sentence, while in verb-final languages such as Japanese or German, it comes very last. Human simultaneous translators must make predictions about the unspoken final verb to incrementally translate the sentence. Minimizing interpretation delay thus requires making constant predictions and deciding when to trust those predictions and commit to translating in real-time.",
"cite_spans": [
{
"start": 72,
"end": 95,
"text": "(Levy and Keller, 2013;",
"ref_id": "BIBREF14"
},
{
"start": 96,
"end": 115,
"text": "Momma et al., 2015;",
"ref_id": "BIBREF17"
},
{
"start": 116,
"end": 134,
"text": "Chow et al., 2015)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Importance of Verb Prediction",
"sec_num": "1"
},
{
"text": "Such prediction can also aid machines. Matsubara et al. (2000) use pattern-matching rules; Grissom II et al. (2014) use a statistical n-gram approach; and Oda et al. (2015) extend the idea of using prediction by predicting entire syntactic constituents for English-Japanese translation. These systems require fast, accurate verb prediction to further improve simultaneous translation systems. We focus on verb prediction in verb-final languages such as Japanese with this motivation in mind.",
"cite_spans": [
{
"start": 39,
"end": 62,
"text": "Matsubara et al. (2000)",
"ref_id": "BIBREF15"
},
{
"start": 91,
"end": 115,
"text": "Grissom II et al. (2014)",
"ref_id": "BIBREF4"
},
{
"start": 155,
"end": 172,
"text": "Oda et al. (2015)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Importance of Verb Prediction",
"sec_num": "1"
},
{
"text": "In Section 2, we present what is, to our knowledge, the first study of humans' ability to incrementally predict the verbs in Japanese. We use these human data as a yardstick to which to compare computational incremental verb prediction. Incorporating some of the key insights from our human study into a discriminative model-namely, the importance of case markers-Section 3 presents a better incremental verb classifier than existing verb prediction schemes. Having established both human and computer performance on this challenging and interesting task, Section 4 reviews our work's relationship to other studies in NLP and linguistics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Importance of Verb Prediction",
"sec_num": "1"
},
{
"text": "We first examine human verb selection in a constrained setting to better understand what performance we should demand of computational approaches. While we know that humans make incremental predictions across sentences, we do not know how skilled they are in doing so. While it's possible that machines-with unbounded memory and access to Internet-sized data-could do better than humans, this study allows us to appropriately gauge our expectations for computational systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Verb Prediction",
"sec_num": "2"
},
{
"text": "We use crowdsourcing to measure how well novice humans can predict the final verb phrase of incomplete Japanese sentences in a multiple choice setting. We use Japanese text of the Kyoto Free Translation Task corpus (Neubig, 2011, KFT) , a collection of Wikipedia articles in English and Japanese, representing standard, grammatical text and readily usable for future SOV-SVO machine translation experiments.",
"cite_spans": [
{
"start": 215,
"end": 234,
"text": "(Neubig, 2011, KFT)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Human Verb Prediction",
"sec_num": "2"
},
{
"text": "This section describes the data sources, preparation, and methodology for crowdsourced verb prediction. Given an incomplete sentence, participants select a sentence-final verb phrase containing a verb from a list of four choices to complete the sentence, one of which is the original completion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Verbs and Sentences",
"sec_num": "2.1"
},
{
"text": "We randomly select 200 sentences from the development set of the KFT corpus (Neubig, 2011) . We use these data because the sentences are from Wikipedia articles and thus represent widely-read, grammatical sentences. These data are directly comparable to our computational experiments and readily usable for future SOV-SVO machine translation experiments.",
"cite_spans": [
{
"start": 76,
"end": 90,
"text": "(Neubig, 2011)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Verbs and Sentences",
"sec_num": "2.1"
},
{
"text": "We ask participants to predict a \"verb chunk\" that would be natural for humans. More technically, this is a sentence-final bunsetsu. 1 We identify verb bunsetsu with a dependency parser (Kurohashi and Nagao, 1994) . Of interest are bunsetsu at the end of a sentence that contain a verb. We also use bunsetsu for segmenting the incomplete sentences we show to humans, only segmenting between bunsetsu to ensure each segment is a meaningful unit.",
"cite_spans": [
{
"start": 133,
"end": 134,
"text": "1",
"ref_id": null
},
{
"start": 186,
"end": 213,
"text": "(Kurohashi and Nagao, 1994)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Verbs and Sentences",
"sec_num": "2.1"
},
{
"text": "1 A bunsetsu is a commonly used linguistic unit in Japanese, roughly equivalent to an English phrase: a collection of content words and zero or more functional words. Japanese verb bunsetsu often encompass complex conjugation. For example, a verb phrase \u8aad\u307f\u305f\u304f\u306a\u304b\u3063\u305f (read-DESI-NEG-PAST), meaning 'didn't want to read', has multiple tokens capturing tense, negation, etc. necessary for translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Verbs and Sentences",
"sec_num": "2.1"
},
{
"text": "Answer Choice Selection We display the correct verb bunsetsu and three incorrect bunsetsu completions as choices that occur in the data with frequency close to the correct answer in the overall corpus. We manually inspect the incorrect answers to ensure that these choices are semantically distant, i.e., excluding synonyms or troponyms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Verbs and Sentences",
"sec_num": "2.1"
},
{
"text": "We create two test sets of truncated sentences from the KFT corpus: The first, the full context set, includes all but the final bunsetsu-i.e., the verb phrase-to guess. The second set, the random length set, contains the same sentences truncated at predetermined, random bunsetsu boundaries. The average sentence length is nine bunsetsu, with a maximum of fourteen and minimum of three. We display sentences in the original Japanese script.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Presentation",
"sec_num": null
},
{
"text": "Participants view the task as a game of guessing the final verb. Each fragment has four concurrently displayed completion options, as in the prompt (2) and answers (3). Users receive no feedback from the interface.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Presentation",
"sec_num": null
},
{
"text": "We use CrowdFlower 2 to collect participants' answers, at a total cost of approximately USD$300. From an initial pool of fifty-six participants, we remove twenty via a Japanese fluency screening. We verify the efficacy of this test with non-native but highly proficient Japanese learners; none passed. We collect five judgments per sentence from each participant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Presentation",
"sec_num": null
},
{
"text": "(2) \u8c37\u5d0e\u6f64\u4e00\u90ce\u306f Junichiro Tanizaki-TOP \u6570\u5bc4\u5c4b\u3092 tea-ceremony house-OBJ random length set, shows how the amount of revealed data affects the predictability of the final verb chunk. We examine a correlation between the length of the pre-verb sentence fragment and participants' accuracy ( Figure 1) .",
"cite_spans": [],
"ref_spans": [
{
"start": 278,
"end": 287,
"text": "Figure 1)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Sentence Presentation",
"sec_num": null
},
{
"text": "Psycholinguistic experiments using lexical decision tasks suggest Japanese speakers start syntactic processing by using case-the type and number of case-marked arguments-before the verb's availability (Yamashita, 2000) . We also examine the correlation between the number of case markers 3 and accuracy. It is likely that the number of case markers and the length of the sentence fragment are confounded; so, we create a measure, the proportion of case markers to the overall sentence information (the number of case markers in the fragment divided by the number of bunsetsu chunks). We call this case density.",
"cite_spans": [
{
"start": 201,
"end": 218,
"text": "(Yamashita, 2000)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Presentation",
"sec_num": null
},
{
"text": "In the full context set, average accuracy over 200 sentences is 81.1%, significantly better than chance (p < 2.2 \u2022 10 \u221216 ). Figure 1 shows the accuracy per sentence length as defined by the bunsetsu unit. A one-way ANOVA reveals a significant effect of the sentence length (F (1, 998) = 7.512, p < 0.00624), but not the case density (F (1, 998) = 1.2, p = 0.274).",
"cite_spans": [],
"ref_spans": [
{
"start": 125,
"end": 133,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Results of Human Experiments",
"sec_num": "2.3"
},
{
"text": "In the random length set, average accuracy over 200 sentences is 54.2%, significantly better than chance (t(199) = 11.8205, p < 2.2 \u2022 10 \u221216 ). of the presented sentence fragment. A one-way ANOVA reveals a significant effect of the sentence length (F (1, 998) = 57.44, p < 7.94 \u2022 10 \u221214 ). We also find a significant effect of the case density (F (1, 998) = 5.884, p = 0.0155).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results of Human Experiments",
"sec_num": "2.3"
},
{
"text": "Predictability increases with the percent of the sentence available in all of our experiments. By the end of the sentence, the verb chunks are highly predictable by humans in the multiple choice setting. Participants choose the final verb more accurately as they gain access to more case markers in the random length set but not in the full context set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "2.4"
},
{
"text": "Case density is a significant factor in predictive accuracy on the random length set for humans, suggesting that case is more helpful in predicting a sentence-final verb when the preceding contextual information is insufficient. The following example illustrates how case helps in prediction. The nominative and accusative markers greatly narrow the choices, as shown in (4). 4 Our results further support the proposition case markers modulate predictability in SOV verb-final processing.",
"cite_spans": [
{
"start": 376,
"end": 377,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "2.4"
},
{
"text": "(4) \u6c5f\u6238\u5e55\u5e9c\u533a-\u304c \u304c \u304c Edo shogunate-NOM \u6210\u7acb\u3059\u308b\u3068 establish-do-CONJ \u5bfa\u9662\u6cd5\u5ea6-\u306b \u306b \u306b-\u3088\u308a temple-prohibition-etc.-ACC-for - Verb P(verb)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "2.4"
},
{
"text": "Reuters Kyoto Figure 3 : Distribution of the top 100 content verbs in the Kyoto corpus and the Reuters Japanese news corpus. Both are Zipfian, but the Reuters corpus is even more skewed, even with the common special cases excluded.",
"cite_spans": [],
"ref_spans": [
{
"start": 14,
"end": 22,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "2.4"
},
{
"text": "'After Edo shogunate has established, due to the temple prohibition etc. -'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "2.4"
},
{
"text": "In other cases, there exist choices, which, while incorrect, could naturally complete the sentence. These questions are frequently missed. For instance, in one 90% revealed sentence, the participant has the choices: (i) \u53ce\u3081 \u308b (put-PRES), (ii) \u53b3\u3057\u304f\u306a\u308b (strict-become), (iii) \u53ce\u9332\u3055\u308c\u3066\u3044 \u308b (record-do.PASS-AUX.PRES), and (iv) \u52d9\u3081\u308b (work-PRES). Choice (i) is the correct answer, but choice (iii) is a reasonable choice for a Japanese speaker. All participants missed this question, and all chose the same wrong answer (iii). We leave a cloze task where participants can freely fill in the sentence-final to future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "2.4"
},
{
"text": "These results provide a basis of comparison for automatic prediction. In the next section, we examine whether computational models can predict final verbs and compare the models' performance to that of humans.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "2.4"
},
{
"text": "Now that we have the results of the previous section, we have baselines against which we can compare computational verb prediction approaches. In this section, we introduce incremental verb classification with a linear classifier. 5 For our investigation of computational verb classification, we use two very different languages that both have verb-final syntax-Japanese, which is agglutinative, and German, which is not-and show that discriminative classifiers can predict final verbs with increasing accuracy as more context of sentences is revealed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Verb Prediction",
"sec_num": "3"
},
{
"text": "A simple verb prediction scheme applied to German (Grissom II et al., 2014) achieves poor accuracy. Their approach creates a Kneser-Ney n-gram language model for the prior context associated with each verb in the corpus; i.e., 50 n-gram models for 50 verbs. Given pre-verb n-gram context c in a sentence S t , and verb prediction v (t) \u2208 V , the verb selection is defined by the following equation:",
"cite_spans": [
{
"start": 50,
"end": 75,
"text": "(Grissom II et al., 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Verb Prediction",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v (t) \u2261 arg max v c\u2208St p(c | v)p(v).",
"eq_num": "(1)"
}
],
"section": "Machine Verb Prediction",
"sec_num": "3"
},
{
"text": "It chooses the verb that maximizes the probability of the observed context, scaled by the prior probability of the verb in the overall corpus. Unsurprisingly, given the distribution of verbs in real data (Figure 3 ), this n-gram-based approach has low accuracy and tends to predict the most common verb. For a translation system, this often degenerates into the less interesting problem of whether to trust whether the final verb is indeed a common one. While this improves translation delay, better predictions will lead to more significant improvements. We instead opt for a one-vs-all discriminative classification approach. 6",
"cite_spans": [],
"ref_spans": [
{
"start": 204,
"end": 213,
"text": "(Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Machine Verb Prediction",
"sec_num": "3"
},
{
"text": "We first incrementally classify verbs on the same 200 sentences from Section 2. Since the answer choices are often complex verb bunsetsu and since many of these verb phrase answer choices do not appear among the most common verbs, lemmatizing the verbs and performing one-vs-all classification yields extremely low accuracy. Thus, we use binary classification with a single linear classifier to produce a probability for each candidate answer, encoding the verb phrase itself into the feature vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification on Human Data",
"sec_num": "3.1"
},
{
"text": "The processing is as follows: We train on 463,716 verb-final sentences extracted from the training data. We use both context features and final verb features. Our context features, i.e., those preceding the final verb, are represented as follows: the context unigrams and bigrams take a value of 1 Despite many out-ofvocabulary items and significant noise, the average accuracy, shown in the non-monotonic line in the plot, increases over the course of the sentence. Larger, darker circles indicate more examples for a given position. Accuracy was calculated by aggregating the guesses at 5% intervals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training a Morphological Model",
"sec_num": "3.1.1"
},
{
"text": "if they are present and 0 otherwise; case markers observed in the sentence context are represented as unigrams and bigrams in the order that they appear; and we reserve a distinct feature for the last observed case marker in the sentence. Our verb features consist of the final verb's tokens given by the morphological analyzer, which, in addition to the verb stem itself, typically include tense and aspect information. These are represented as unigrams and bigrams in the feature vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training a Morphological Model",
"sec_num": "3.1.1"
},
{
"text": "To allow the classifier to learn, we must encode the interactions between the verb features and the context features. Thus, we use the Cartesian product of sentence and verb features to encode interactions between them: for each training sentence we generate both a positive and a negative example. The example with the correct verb phrase is labeled as a positive example (+1), and we uniformly select a random verb phrase from one of the 500 most common verb phrases and label it as negative (\u22121) example for the same sentence context, 7 yielding 927,432 training examples and 267,037,571 features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training a Morphological Model",
"sec_num": "3.1.1"
},
{
"text": "For clarity, we describe this feature representation more formally. Given sentence S t with a pre- 7 We experimented with several numbers of weighted negative examples and found that one negative example with of equal weight to the positive gave the best results of the configurations we tried. verb context consisting of unigrams, bigrams, and case marker tokens, C = {c 0 , ..., c n }, and bunsetsu verb phrase tokens A = {a 0 , ..., a k }, the feature vector consists of C\u00d7A = {c 0 \u2227a 0 , c 0 \u2227a 1 , ..., c n \u2227 a k }, where \u2227 concatenates the two context and answer strings. During learning, the weights learned for the concatenated tokens are thus based on the relationship between a context token and a bunsetsu token and mapped to {+1, \u22121}. More concretely, individual morphemes of the Japanese verb phrase are combined with the pre-verb unigrams, bigrams, and uniquely identified case marker tokens. Accuracy improves when the morphemes used in the negative examples and positive examples are disjoint; so, we enforce this constraint when selecting negative examples. For example, if the positive example includes the past tense morpheme, \u305f, the negative example is altogether disallowed from having this morpheme as a verb feature.",
"cite_spans": [
{
"start": 99,
"end": 100,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training a Morphological Model",
"sec_num": "3.1.1"
},
{
"text": "At test time, we test progressively longer fragments of each sentence, extracting the aforementioned features online until the entire pre-verb context is available. For every sentence fragment, the classifier determines the probability of each of the four possible verbs by adding their verb features to the feature vector of the example. The answer choice with the highest probability of +1 (or the lowest probability of \u22121) is chosen as the answer. By taking this approach, we can model complex verbs and their context jointly. Intuitively, the probability of a (+1) is the model's prediction of how well the bunsetsu verb phrase fits with the sentence context (represented by the feature vector).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Choosing an Answer",
"sec_num": "3.1.2"
},
{
"text": "Some verbs are absent from the training data, forcing the classifier to rely on morphemes to distinguish between them. The alternative-e.g., in a typical one-vs-all classification approach-is that the classifier could reason from nothing whatsoever when a fully-inflected verb is absent from the training data. Given the complexity of bunsetsu, this happens often even in large corpora for a language such as Japanese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Choosing an Answer",
"sec_num": "3.1.2"
},
{
"text": "Despite only choosing among four choices, this task is in many ways more difficult than the 50label classification problem described in the next section because of the added complexity inherent modeling the effect of morphemes and missing examples. These limitations notwithstanding, the accuracy does improve as more of the sentence is revealed (Figure 4) , indicating that the algorithm learns to use these features to rank verbs, though the performance significantly lags that of both the human participants and our later experiments. Additionally, on the full context set, sentence length is negatively correlated with accuracy ( Figure 5) , as in the much more convincing results of our human experiments (Figure 1 ), though the trend is not entirely consistent, making it difficult to draw firm conclusions. Case density is again positively correlated with accuracy on both the random ( Figure 6 ) and full context sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 346,
"end": 356,
"text": "(Figure 4)",
"ref_id": "FIGREF2"
},
{
"start": 634,
"end": 643,
"text": "Figure 5)",
"ref_id": "FIGREF3"
},
{
"start": 710,
"end": 719,
"text": "(Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 893,
"end": 901,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multiple Choice Results",
"sec_num": "3.1.3"
},
{
"text": "An Illustrative Example To gain some insight into how features can influence the classifier, we here examine an example of the classifier's behavior on the multiple choice data. Figure 6 : Classification accuracy as a function of case density on the incremental sentences. The accuracy is correlated with case density, but the data are extremely noisy. Full-context accuracy has a similar trend (not shown).",
"cite_spans": [],
"ref_spans": [
{
"start": 178,
"end": 186,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multiple Choice Results",
"sec_num": "3.1.3"
},
{
"text": "(c) \u52a0\u3048-\u3089\u308c-\u3066\u3044-\u308b add-PASS-CONT-NPST",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multiple Choice Results",
"sec_num": "3.1.3"
},
{
"text": "(d) \u52e4\u3081-\u308b serve-NPST",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multiple Choice Results",
"sec_num": "3.1.3"
},
{
"text": "In Example (5), the classifier incorrectly chooses \"issue\" as the verb until observing the accusative case marker attached to \"Confucianism\". At this point, the classifier's confidence in the correct answer rises to 0.74-and correctly chooses \"strive\". This answer goes unchanged for the remainder of the sentence, though \"study\" attaches to \"Confucianism\", not the final verb. The combined evidence, however, is enough for the classifier to select correctly, and indeed, most of the following tokens only increase the classifier's confidence. Adding \"subsequently\" increases confidence to 0.84, an intuitive increase given the likely tense information contained in such a word. The somewhat redundant case marker here only increases confidence to 0.86. Adding the reference to the temple decreases confidence again to 0.79. But adding the final case marker, which also forms a new bigram with the previous word, results in a huge increase in confidence, to 0.90.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multiple Choice Results",
"sec_num": "3.1.3"
},
{
"text": "While the multiple choice experiment was more open-ended (predicting random verbs), we now focus on a more constrained task: how well can we predict the most frequent verbs. This is the central conceit of Grissom II et al. (2014) : if you can do a good job of this, you can improve simultaneous translation. They show a slight improvement in simultaneous translation by using n-gram language model-based verb prediction. We show a large improvement over their approach to verb prediction using a discriminative multiclass logistic classifier (Langford et al., 2007) .",
"cite_spans": [
{
"start": 205,
"end": 229,
"text": "Grissom II et al. (2014)",
"ref_id": "BIBREF4"
},
{
"start": 542,
"end": 565,
"text": "(Langford et al., 2007)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multiclass Verb Prediction",
"sec_num": "3.2"
},
{
"text": "Data Preparation Our classes for multiclass classification are the fifty most common verbs in the KFT (Japanese, as in the human study) and Wortschatz corpora (Biemann et al., 2007, German) . We use data from the training and test sets of the KFT Japanese corpus of Wikipedia articles and a random split of the German Wortschatz web corpus, from which we extract the verb-final sentences. Grissom II et al. (2014) use an n-gram model to distinguish between the fifty most common German verbs for SOV-SVO simultaneous machine translation, which we replicate as our baseline. Following this study, we train a model on the fifty most common verbs in the training set.",
"cite_spans": [
{
"start": 159,
"end": 189,
"text": "(Biemann et al., 2007, German)",
"ref_id": null
},
{
"start": 389,
"end": 413,
"text": "Grissom II et al. (2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multiclass Verb Prediction",
"sec_num": "3.2"
},
{
"text": "In Japanese, due to the small size of the standard test set, we split the data randomly, training on 60,926 verb-final sentences ending in the top fifty verbs and testing on 1,932. Our total feature count is 4,649,055. We use the MeCab (Kudo, 2005) morphological analyzer for segmentation and verb identification. We consider only verb-final sentences. We skip semantically vacuous post-verbal copulas when identifying final verbs.",
"cite_spans": [
{
"start": 236,
"end": 248,
"text": "(Kudo, 2005)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multiclass Verb Prediction",
"sec_num": "3.2"
},
{
"text": "We identify verbs in the German text with a part-of-speech tagger (Toutanova et al., 2003) and select from the top fifty verbs. We consider the sentence-ending set of verbs to be the final verbs. We train on 76,209 verb-final sentences ending in the top fifty verbs and test on 9,386. In German, to approximate the case information that we extract in Japanese, we test the inclusion of equivalent unigram and bigram features for German articles, the surface forms of which determine the case of the next noun phrase.",
"cite_spans": [
{
"start": 66,
"end": 90,
"text": "(Toutanova et al., 2003)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Finding Verbs",
"sec_num": null
},
{
"text": "In Japanese, we omit some special cases of light verbs that combine with other verbs, as well as ambiguous surface forms and copulas. 8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finding Verbs",
"sec_num": null
},
{
"text": "Features All features are encoded as binary features indicating their presence or absence. For Japanese, we again include case unigrams and case bigrams, which encode as distinct features the pairs of case markers observed thus far. 9 We also include a feature for the last observed case marker. For both Japanese and German, we normalize the verbs to the non-past, plain form, which both provides more training data for each verb and simplifies the job of our classifier.",
"cite_spans": [
{
"start": 229,
"end": 230,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Finding Verbs",
"sec_num": null
},
{
"text": "German case is conveyed primarily through articles and pronouns, so we include special features for articles. For example, for the sentence \"Es wurde ihnen von einem alten Freund geholfen\", we add the features ART es ihnen and ART ihnen einem to convey case information beyond individual words and bigrams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finding Verbs",
"sec_num": null
},
{
"text": "Individual tokens and token bigrams are also used as binary features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finding Verbs",
"sec_num": null
},
{
"text": "An Example for Every Word In simultaneous interpretation, a person or algorithm receives a constant stream of words, and each new word provides new information that can aid in prediction. Previous predictive approaches to simultaneous machine interpretation have operated this way, and we do the same here: as each new word is observed, we make a prediction. This generalizes the random presentation of prefixes in the human study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finding Verbs",
"sec_num": null
},
{
"text": "Better at the End A discriminative classifier does better than an n-gram classifier, which tends to over-predict frequent verbs. By the end of the sentence, accuracy reaches 39.9% for German (Figure 7) and 29.9% for Japanese (Figure 8), greatly exceeding the most-frequent-class baseline of 3.7% (German) and 6.05% (Japanese). The n-gram language model also outperforms this baseline, but not by much. It, too, improves over the course of the sentence, but it cannot reliably predict more than a handful of verbs in either language.",
"cite_spans": [],
"ref_spans": [
{
"start": 202,
"end": 210,
"text": "Figure 7",
"ref_id": "FIGREF5"
},
{
"start": 232,
"end": 242,
"text": "(Figure 8)",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Classification Results and Discussion",
"sec_num": "3.3"
},
{
"text": "We also omit the light verb naru (\"to become\" or \"to make up\") for similar reasons to suru. The increasing trend shown in the results does not change with their inclusion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Results and Discussion",
"sec_num": "3.3"
},
{
"text": "9 For instance, given a sentence fragment X-\u306b Y-\u3092, representing X-DAT Y-ACC, the case bigram would be \u306b\u2227\u3092. Richer Features Help (Mostly at the End) Bigram features help both languages, but Japanese more than German; beyond bigrams, however, trigrams and longer features overfit the training data and hurt performance. The better performance for Japanese bigrams is likely because word boundaries are not well-defined in Japanese, and individual morphemes can combine in ways that significantly add information. German word boundaries are more precise and words (particularly nouns) can carry substantial information themselves. Richer features matter more toward the end of the sentence. In Japanese, adding bigrams consistently outperforms unigrams alone, but in both languages, adding special features for tokens with case information helps almost as much as adding the full set of bigrams. In Japanese, case markings always immediately follow the words marked, and in German the articles precede the nouns to which they assign case; thus, rather than relying on isolated unigrams, using bigrams provides opportunities to encode case-marked words that more narrowly select for verbs. In Japanese, the differences are more pronounced toward the very end of the sentences (and less so in German).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Results and Discussion",
"sec_num": "3.3"
},
{
"text": "Richer features help more at the end, but not merely because the last words of the sentence represent the densest feature vectors. In Japanese, the last word is usually a case-marked noun phrase or adverb that matches the main predicate. The final word is therefore immune to subclause interference and must modify the final verb, boosting classifier performance in these final positions and amplifying the predictive discrepancies between the various feature sets. Accuracy spikes at the end of Japanese sentences, where case information helps nearly as much as adding the entire set of bigrams, further supporting case information's importance. Deeper processing-e.g., separating case-marked words in subclauses from those in the main clause-would likely be more useful. Features and feature-selection strategies that we tried but that did not help include the following: adding only case marker unigrams (instead of bigrams); filtering the features by using only case-marked words; only allowing one word per case marker in the feature vector (the most recent); using decaying weights on features further in the past; adding part-of-speech tag n-grams; and adding the word nearest to the centroid of the observed context in a word embedding space. While these features may have potential, they did not lead to meaningful increases in accuracy in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Results and Discussion",
"sec_num": "3.3"
},
{
"text": "While to our knowledge our work is the first in-depth study of incremental verb prediction, it is not the first study of verb prediction in humans or machines. This section reviews that related work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "Human Verb Prediction Prediction is easier with more context and explicit case markings. Teramura (1987) shows that next word prediction in Japanese improves as more words are incrementally revealed. While only looking at verb prediction given the complete preceding context, Yamashita (1997) finds that scrambling word order in Japanese-a case-rich language that allows such scrambling-does not harm final verb prediction, but that explicit case marking helps it. Our results show that this is true even for incremental verb prediction. Levy and Keller (2013) also find that dative markers aid German verb prediction. Neurolinguistic measurements by Friederici and Frisch (2000) suggest that processing verb-final clauses in German uses both semantic and syntactic information, but that the two are processed differently. In Japanese, Koso et al. (2011) measure the effect of case markings on predicting verbs with strong case preferences. This is consistent with our use of case-based features and suggests that further gains are possible using richer syntactic representations. Chow et al. (2015) use N400 measurements to investigate two competing hypotheses for the initial prediction of an upcoming verb: whether predictions are dependent on all words equally (the Bag-of-words hypothesis), or alternatively, whether prediction is selectively modulated by the final verb's arguments (the Bag-of-arguments hypothesis). They argue for the latter.",
"cite_spans": [
{
"start": 89,
"end": 104,
"text": "Teramura (1987)",
"ref_id": null
},
{
"start": 846,
"end": 864,
"text": "Koso et al. (2011)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "The literature on incremental verb prediction is sparse. A key finding of Matsubara et al. (2002) is that Japanese-English simultaneous interpreters, when given access to lecture slides, would refer to them to predict the next phrase.",
"cite_spans": [
{
"start": 74,
"end": 97,
"text": "Matsubara et al. (2002)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "Prediction for Simultaneous Machine Translation The Verbmobil simultaneous translation system (Kay et al., 1992) uses deleted interpolation (Jelinek, 1990) to create weighted n-gram models to predict dialogue acts-almost identical to predicting the next word (Reithinger et al., 1996). Konieczny and D\u00f6ring (2003) predict verbs with a recurrent neural network, but Matsubara et al. (2000) was the first to use verb predictions as part of a simultaneous interpretation system. They use pattern matching-based predictions of English verbs. In contrast, Grissom II et al. (2014) use a statistical approach with n-gram models to predict German verbs and particles (in Section 3 we show that this model predicts verbs poorly). However, their simultaneous translation system is able to learn when to trust these predictions. Oda et al. (2015) extend the idea of using prediction by predicting entire syntactic constituents for English-Japanese simultaneous machine translation. Both systems will likely benefit from our improved verb prediction presented here.",
"cite_spans": [
{
"start": 94,
"end": 112,
"text": "(Kay et al., 1992)",
"ref_id": "BIBREF7"
},
{
"start": 140,
"end": 155,
"text": "(Jelinek, 1990)",
"ref_id": "BIBREF6"
},
{
"start": 261,
"end": 286,
"text": "(Reithinger et al., 1996)",
"ref_id": "BIBREF20"
},
{
"start": 289,
"end": 316,
"text": "Konieczny and D\u00f6ring (2003)",
"ref_id": "BIBREF8"
},
{
"start": 368,
"end": 391,
"text": "Matsubara et al. (2000)",
"ref_id": "BIBREF15"
},
{
"start": 554,
"end": 578,
"text": "Grissom II et al. (2014)",
"ref_id": "BIBREF4"
},
{
"start": 824,
"end": 841,
"text": "Oda et al. (2015)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "Verb prediction is hard for both machines and humans but impossible for neither. Verbs become more predictable in discriminative settings as more of the sentence is revealed, and when all of the prior context is available, the verbs are highly predictable by humans when a limited number of choices is available, though even then not perfectly so. While we make no claims concerning upper or lower bounds of predictability in different settings, our dataset provides benchmarks for future verb prediction research on publicly available corpora: cognitive scientists can validate prediction, confusion, and anticipation; engineers have a human benchmark for their systems; and linguists can conduct future experiments on predictability. Shallow features can be used to predict verbs more accurately with more context. Improving verb prediction can aid simultaneous translation systems, which have already been shown to benefit from verb predictions, as well as enable new applications that involve predicting future linguistic input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "http://www.crowdflower.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In this study, we counted case markers that mark nominative (-ga), accusative (-wo), ablative (-kara), and dative (-ni).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "A recent psycholinguistic study on incremental Japanese verb-final processing (Momma et al., 2015) argues that native Japanese speakers plan verbs in advance, before the articulation of object nouns, but not subject nouns. Since case markers assign the roles of subject and object in Japanese, we expect that a high ratio of case markers to words will increase the predictability of verbs. In addition, Yamashita (1997) argues that the variety of case markers increases predictability just before the verb.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "While we use logistic regression, using hinge loss achieves similar accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "One-vs-all classification builds a classifier for each class versus the aggregate of all other classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In Japanese, we omit some ambiguous cases and variants of \"is\" and \"do\": excluded are variants of suru (\"to do\"), which combines with nouns to form new verbs, aru (\"is\", inanimate case), and iru (\"is\", animate case). The tokens aru and iru also combine with other verbs to change tense and aspect, in which case they are not verbs, and can form the copula de aru. Distinguishing between all of these cases is beyond the scope of this study, so they are excluded. We also omit duplicates that are spelled differently (i.e., the same word but spelled without Chinese (kanji) characters and slightly different forms of the same root).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their comments. We thank Yusuke Miyao for his helpful support. We would also like to thank James H. Martin, Martha Palmer, Hal Daum\u00e9 III, Mans Hulden, Mohit Iyyer, John Morgan, Shota Momma, Graham Neubig, and Sho Hoshino for their invaluable discussions and input. This work was supported by NSF grant IIS-1320538. Boyd-Graber is also partially supported by NSF grants CCF-1409287 and NCSE-1422492. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "6"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Leipzig corpora collection-monolingual corpora of standard size",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Heyer",
"suffix": ""
},
{
"first": "Uwe",
"middle": [],
"last": "Quasthoff",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Richter",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of Corpus Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Biemann, Gerhard Heyer, Uwe Quasthoff, and Matthias Richter. 2007. The Leipzig corpora collection-monolingual corpora of standard size. Proceedings of Corpus Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A \"bag-of-arguments\" mechanism for initial verb predictions. Language",
"authors": [
{
"first": "Cybelle",
"middle": [],
"last": "Wing-Yee Chow",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Lau",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Phillips",
"suffix": ""
}
],
"year": 2015,
"venue": "Cognition and Neuroscience",
"volume": "",
"issue": "",
"pages": "1--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wing-Yee Chow, Cybelle Smith, Ellen Lau, and Colin Phillips. 2015. A \"bag-of-arguments\" mechanism for initial verb predictions. Language, Cognition and Neuroscience, pages 1-20.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Disambiguation with verb-predictability: Evidence from Japanese garden-path phenomena",
"authors": [
{
"first": "Yasuhara",
"middle": [],
"last": "Den",
"suffix": ""
},
{
"first": "Masakatsu",
"middle": [],
"last": "Inoue",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "179--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yasuhara Den and Masakatsu Inoue. 1997. Disam- biguation with verb-predictability: Evidence from Japanese garden-path phenomena. In Proceedings of the Cognitive Science Society, pages 179-184. Lawrence Erlbaum.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Verb argument structure processing: The role of verbspecific and argument-specific information",
"authors": [
{
"first": "D",
"middle": [],
"last": "Angela",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Friederici",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Frisch",
"suffix": ""
}
],
"year": 2000,
"venue": "Journal of Memory and Language",
"volume": "43",
"issue": "3",
"pages": "476--507",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angela D Friederici and Stefan Frisch. 2000. Verb argument structure processing: The role of verb- specific and argument-specific information. Journal of Memory and Language, 43(3):476-507.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Don't until the final verb wait: Reinforcement learning for simultaneous machine translation",
"authors": [
{
"first": "Alvin",
"middle": [
"C"
],
"last": "Grissom",
"suffix": ""
},
{
"first": "I",
"middle": [
"I"
],
"last": "",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Morgan",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2014,
"venue": "Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alvin C. Grissom II, He He, Jordan Boyd-Graber, John Morgan, and Hal Daum\u00e9 III. 2014. Don't until the final verb wait: Reinforcement learning for simulta- neous machine translation. In Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A probabilistic Earley parser as a psycholinguistic model",
"authors": [
{
"first": "John",
"middle": [],
"last": "Hale",
"suffix": ""
}
],
"year": 2001,
"venue": "Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Hale. 2001. A probabilistic earley parser as a psycholinguistic model. In Conference of the North American Chapter of the Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Self-organized language modeling for speech recognition",
"authors": [
{
"first": "Fred",
"middle": [],
"last": "Jelinek",
"suffix": ""
}
],
"year": 1990,
"venue": "Readings in speech recognition",
"volume": "",
"issue": "",
"pages": "450--506",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fred Jelinek. 1990. Self-organized language modeling for speech recognition. Readings in speech recogni- tion, pages 450-506.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Verbmobil: A translation system for face-to-face dialog",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Kay",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Norvig",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Gawron",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Kay, Peter Norvig, and Mark Gawron. 1992. Verbmobil: A translation system for face-to-face di- alog. University of Chicago Press.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Anticipation of clause-final heads: Evidence from eye-tracking and SRNs",
"authors": [
{
"first": "Lars",
"middle": [],
"last": "Konieczny",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "D\u00f6ring",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of iccs/ascs",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lars Konieczny and Philipp D\u00f6ring. 2003. Antic- ipation of clause-final heads: Evidence from eye- tracking and srns. In Proceedings of iccs/ascs.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "An event-related potential investigation of lexical pitch-accent processing in auditory Japanese",
"authors": [
{
"first": "Ayumi",
"middle": [],
"last": "Koso",
"suffix": ""
},
{
"first": "Shiro",
"middle": [],
"last": "Ojima",
"suffix": ""
},
{
"first": "Hiroko",
"middle": [],
"last": "Hagiwara",
"suffix": ""
}
],
"year": 2011,
"venue": "Brain research",
"volume": "1385",
"issue": "",
"pages": "217--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ayumi Koso, Shiro Ojima, and Hiroko Hagiwara. 2011. An event-related potential investigation of lexical pitch-accent processing in auditory Japanese. Brain research, 1385:217-228.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Mecab: Yet another part-of-speech and morphological analyzer",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taku Kudo. 2005. Mecab: Yet another part-of-speech and morphological analyzer. http://mecab.sourceforge.net/.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Kn parser: Japanese dependency/case structure analyzer",
"authors": [
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the Workshop on Sharable Natural Language Resources",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sadao Kurohashi and Makoto Nagao. 1994. Kn parser: Japanese dependency/case structure analyzer. In Proceedings of the Workshop on Sharable Natural Language Resources.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A look around at what lies ahead: prediction and predictability in language processing. Predictions in the brain: Using our past to generate a future",
"authors": [
{
"first": "Marta",
"middle": [],
"last": "Kutas",
"suffix": ""
},
{
"first": "Katherine",
"middle": [
"A"
],
"last": "Delong",
"suffix": ""
},
{
"first": "Nathaniel",
"middle": [
"J"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "190--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marta Kutas, Katherine A DeLong, and Nathaniel J Smith. 2011. A look around at what lies ahead: prediction and predictability in language processing. Predictions in the brain: Using our past to generate a future, pages 190-207.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Vowpal wabbit online learning project",
"authors": [
{
"first": "John",
"middle": [],
"last": "Langford",
"suffix": ""
},
{
"first": "Lihong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Strehl",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Langford, Lihong Li, and Alex Strehl. 2007. Vowpal wabbit online learning project.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Expectation and locality effects in German verb-final structures",
"authors": [
{
"first": "P",
"middle": [],
"last": "Roger",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of memory and language",
"volume": "68",
"issue": "2",
"pages": "199--222",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roger P Levy and Frank Keller. 2013. Expectation and locality effects in german verb-final structures. Journal of memory and language, 68(2):199-222.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Simultaneous Japanese-English interpretation based on early prediction of English verb",
"authors": [
{
"first": "Shigeki",
"middle": [],
"last": "Matsubara",
"suffix": ""
},
{
"first": "Keiichi",
"middle": [],
"last": "Iwashima",
"suffix": ""
},
{
"first": "Nobuo",
"middle": [],
"last": "Kawaguchi",
"suffix": ""
},
{
"first": "Katsuhiko",
"middle": [],
"last": "Toyama",
"suffix": ""
},
{
"first": "Yasuyoshi",
"middle": [],
"last": "Inagaki",
"suffix": ""
}
],
"year": 2000,
"venue": "Symposium on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shigeki Matsubara, Keiichi Iwashima, Nobuo Kawaguchi, Katsuhiko Toyama, and Yasuyoshi Inagaki. 2000. Simultaneous Japanese-English in- terpretation based on early predition of English verb. In Symposium on Natural Language Processing.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Bilingual spoken monologue corpus for simultaneous machine interpretation research",
"authors": [
{
"first": "Shigeki",
"middle": [],
"last": "Matsubara",
"suffix": ""
},
{
"first": "Akira",
"middle": [],
"last": "Takagi",
"suffix": ""
},
{
"first": "Nobuo",
"middle": [],
"last": "Kawaguchi",
"suffix": ""
},
{
"first": "Yasuyoshi",
"middle": [],
"last": "Inagaki",
"suffix": ""
}
],
"year": 2002,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shigeki Matsubara, Akira Takagi, Nobuo Kawaguchi, and Yasuyoshi Inagaki. 2002. Bilingual spoken monologue corpus for simultaneous machine inter- pretation research. In LREC.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The timing of verb selection in Japanese sentence production",
"authors": [
{
"first": "Shota",
"middle": [],
"last": "Momma",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Slevc",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Phillips",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of experimental psychology. Learning, memory, and cognition",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shota Momma, L Robert Slevc, and Colin Phillips. 2015. The timing of verb selection in japanese sen- tence production. Journal of experimental psychol- ogy. Learning, memory, and cognition.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The Kyoto free translation task",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham Neubig. 2011. The Kyoto free translation task. Available online at http://www.phontron.com/kftt.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Syntax-based simultaneous translation through prediction of unseen syntactic constituents",
"authors": [
{
"first": "Yusuke",
"middle": [],
"last": "Oda",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Sakriani",
"middle": [],
"last": "Sakti",
"suffix": ""
},
{
"first": "Tomoki",
"middle": [],
"last": "Toda",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Nakamura",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yusuke Oda, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2015. Syntax-based simultaneous translation through prediction of un- seen syntactic constituents. Proceedings of the As- sociation for Computational Linguistics, June.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Predicting dialogue acts for a speech-to-speech translation system",
"authors": [
{
"first": "Norbert",
"middle": [],
"last": "Reithinger",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Engel",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Kipp",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Klesen",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "2",
"issue": "",
"pages": "654--657",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Norbert Reithinger, Ralf Engel, Michael Kipp, and Martin Klesen. 1996. Predicting dialogue acts for a speech-to-speech translation system. volume 2, pages 654-657. IEEE.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A mathematical theory of communication",
"authors": [
{
"first": "Claude",
"middle": [
"Elwood"
],
"last": "Shannon",
"suffix": ""
}
],
"year": 1948,
"venue": "",
"volume": "27",
"issue": "",
"pages": "623--656",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claude Elwood Shannon. 1948. A mathematical the- ory of communication. Bell Systems Technical Jour- nal, 27:379-423, 623-656.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Feature-rich partof-speech tagging with a cyclic dependency network",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "173--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, Dan Klein, Christopher D Man- ning, and Yoram Singer. 2003. Feature-rich part- of-speech tagging with a cyclic dependency network. In Conference of the North American Chapter of the Association for Computational Linguistics, pages 173-180.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The effects of word-order and case marking information on the processing of Japanese",
"authors": [
{
"first": "Hiroko",
"middle": [],
"last": "Yamashita",
"suffix": ""
}
],
"year": 1997,
"venue": "Journal of Psycholinguistic Research",
"volume": "26",
"issue": "2",
"pages": "163--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroko Yamashita. 1997. The effects of word-order and case marking information on the processing of Japanese. Journal of Psycholinguistic Research, 26(2):163-188.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Structural computation and the role of morphological markings in the processing of Japanese",
"authors": [
{
"first": "Hiroko",
"middle": [],
"last": "Yamashita",
"suffix": ""
}
],
"year": 2000,
"venue": "Language and speech",
"volume": "43",
"issue": "4",
"pages": "429--455",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroko Yamashita. 2000. Structural computation and the role of morphological markings in the processing of japanese. Language and speech, 43(4):429-455.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "The first task, on the full context set, shows how humans predict the sentence-final verb chunk with all context available. The second task, on the Full context set: Accuracy is generally high, but slightly decreases on longer, more complicated sentences, averaging 81.1%.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"text": "Figure 2 shows the accuracy per percentage of length. Random length set: The accuracy of human verb predictions reliably increases as more of the sentence is revealed.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"text": "Verb classification results on crowdsourced sentences.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF3": {
"text": "Classification accuracy as a function of sentence length on the full context set. While there is a clear correlation between sentence length and accuracy, there are several outliers. Compare toFigure 1.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF4": {
"text": "(a) \u52b1\u3093-\u3060 strive-PAST (b) \u5275\u520a-\u3055-\u308c-\u308b issue-do-PASS-NPST",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF5": {
"text": "German average prediction accuracy over the course of sentences. Bigrams help slightly in the second half of the sentence. Adding special features for case-assigning articles to unigrams nearly matches the performance of adding all bigrams in the final 10%. All handily outperform the trigram language model.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF6": {
"text": "Japanese average prediction accuracy over the course of sentences. Adding bigrams consistently outperforms unigrams alone in Japanese, possibly due to the agglutinative nature of the language. The accuracies diverge the most toward the end of the sentences: Adding only explicit case markers to unigrams nearly matches performance of adding all bigrams toward the end. All outperform the trigram language model.",
"num": null,
"type_str": "figure",
"uris": null
}
}
}
}