id: stringlengths, 7–12
sentence1: stringlengths, 6–1.27k
sentence2: stringlengths, 6–926
label: stringclasses, 4 values
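The column summary above implies a simple four-field record layout (id, sentence1, sentence2, label). As a minimal sketch, a flat dump like the records below can be regrouped into typed records; the `ContrastPair` class and `parse_records` helper are illustrative names, not part of the dataset itself:

```python
from dataclasses import dataclass

@dataclass
class ContrastPair:
    id: str          # e.g. "train_7600" (7-12 chars per the header stats)
    sentence1: str   # first sentence of the pair (6-1.27k chars)
    sentence2: str   # second sentence of the pair (6-926 chars)
    label: str       # one of 4 discourse-relation classes, e.g. "contrasting"

def parse_records(lines):
    """Group a flat id / sentence1 / sentence2 / label dump into records."""
    out = []
    for i in range(0, len(lines), 4):
        chunk = lines[i:i + 4]
        if len(chunk) == 4:  # drop a trailing incomplete record, if any
            out.append(ContrastPair(*chunk))
    return out

sample = [
    "train_7600",
    "For instance, to build the NP happy dogs from Cleveland ...",
    "a phrasal approach misses the broader generalization ...",
    "contrasting",
]
records = parse_records(sample)
```

Note that sentence2 entries systematically begin lowercase: the discourse connective (e.g. "However,") was stripped when the pairs were extracted, so the field should not be treated as a well-formed standalone sentence.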
train_7600
For instance, to build the NP happy dogs from Cleveland lexically in HPSG would generate a lexical NP dogs incompatible with the constraints on modifiers like happy (which have N MOD values) and would further prevent the added quantifier from outscoping the modifiers.
a phrasal approach misses the broader generalization that these constructions are lexically triggered (by particular noun classes/inflection) and again heterogeneously spreads out language particular grammatical information between the lexicon and phrasal rules.
contrasting
train_7601
The parsing methodology investigated here has previously been applied to Swedish, where promising results were obtained with a relatively small treebank (approximately 5000 sentences for training), resulting in an attachment score of 84.7% and a labeled accuracy of 80.6% (Nivre et al., 2004).
since there are no comparable results available for Swedish, it is difficult to assess the significance of these findings, which is one of the reasons why we want to apply the method to a benchmark corpus such as the Penn Treebank, even though the annotation in this corpus is not ideal for labeled dependency parsing.
contrasting
train_7602
Under this worst-case analysis many simple concept classes are unlearnable.
in many situations it is more realistic to assume that there is some relationship between the concept and the distribution, and furthermore in general only positive examples will be available.
contrasting
train_7603
We compare it to a state in the hypothesis v. If this state is a representative of the same state in the target v, then L∞(Ŝ_v, P_q) < µ/4 (by the goodness of the multisets), the triangle inequality shows that L∞(Ŝ_{u,σ}, Ŝ_v) < µ/2, and therefore the comparison will return true.
let us suppose that v is a representative of a different state q_v.
contrasting
train_7604
Indeed we can deal with the previous objection in the same way if necessary by requiring the number of states in the generating PDFA to be bounded by a polynomial in the minimal number of states needed to generate the target language.
both of these limitations are unavoidable given the negative results previously discussed.
contrasting
train_7605
For example, automated scores are capable of distinguishing improved MT performance on easier texts or degraded performance on harder texts, so the automated scores also give information on whether one collection of texts is easier or harder than the other for an MT system: the complexity of the evaluation task is directly reflected in the evaluation scores.
there may be a need to avoid such sensitivity.
contrasting
train_7606
MT evaluation is harder than evaluation of other NLP tasks, which makes it partially dependent on intuitive human judgements about text quality.
automated tools are capable of capturing and representing the "absolute" level of performance for MT systems, and this level could then be projected into task-dependent figures for harder or easier texts.
contrasting
train_7607
Human evaluation results are available for all of the texts, with the exception of the news reports translated by System-2, which was not part of the DARPA 94 evaluation.
the human evaluation scores were collected at different times under different experimental conditions using different formulations of the evaluation tasks, which leads to substantial differences between human scores across different evaluations, even if the evaluations were done at the same time.
contrasting
train_7608
A similar tendency also holds for the system after dictionary update.
technically speaking the compared systems are no longer the same, because the dictionary update was done individually for each system, so the quality of the update is an additional factor in the system's performance, in addition to the complexity of the translated texts.
contrasting
train_7609
The retrieval procedure can be efficiently implemented by the techniques of clustering (Cranias et al., 1997) or using A* search algorithm on word graphs (Doi et al., 2004).
it is still more costly to calculate Sim than Prob when the corpus is large.
contrasting
train_7610
MUC (1998); NIS (2003); DUC (2003).
newswire does not offer direct access to facts, events, and opinions; rather, journalists report what they have experienced, and report on the experiences of others.
contrasting
train_7611
By assigning the agents (and the corresponding dialogues) to one of two families we give ourselves the option of restricting user-led transitions between main and ancillary transactions.
the overall objective of our implementation is to maintain a high degree of flexibility in the manner in which the system reacts to unsolicited user utterances.
contrasting
train_7612
The PaymentExpert is identified as an appropriate payment handler, and is placed above AccommodationExpert on the ExpertFocusStack.
let us suppose that eliciting payment details first involves eliciting address details, and so the PaymentExpert in its turn asks the DomainSpotter to find it an agent specialising in address processing, in this case the AddressExpert.
contrasting
train_7613
For example, at the outset of an accommodation enquiry, the related service dialogue frame will not generally contain an explicitly linked payment frame.
the DomainSpotter is able to determine which agents can provide payment support, and so the system generates a number of potential discourse paths relating to payment.
contrasting
train_7614
Accounts of discourse structure vary greatly with respect to how many discourse relations they assume, ranging from two (Grosz & Sidner, 1986) to over 400 different coherence relations, reported in Hovy and Maier (1995).
hovy and Maier (1995) argue that taxonomies with more relations represent subtypes of taxonomies with fewer relations.
contrasting
train_7615
We adopted a sentence unit-based definition of discourse segments.
we also assume that contentful coordinating and subordinating conjunctions (cf.
contrasting
train_7616
Moreover, the attributes and values in a question "pair up" naturally to indicate equality constraints in SQL.
values may be paired with implicit attributes that do not appear in the question (e.g., the attribute 'cuisine' in "What are the Chinese restaurants in Seattle?").
contrasting
train_7617
PRECISE does return two interpretations a small percentage of the time.
even when restricted to returning a single interpretation, PRECISE-1 still achieved an impressive 89.1% accuracy (Table 1).
contrasting
train_7618
These models provide principled ways of including additional conditioning variables other than the preceding words, such as morphological or syntactic features.
the number of possible choices for model parameters creates a large space of models that cannot be searched exhaustively.
contrasting
train_7619
(Geutner, 1995;Ç arki et al., 2000;Kiecza et al., 1999)).
in all of these studies words were decomposed into linear sequences of morphs or morph-like units, using either linguistic knowledge or data-driven techniques.
contrasting
train_7620
For this reason, a context larger than that provided by a trigram is typically required, which quickly leads to data-sparsity.
to these approaches, factored language models encode morphological knowledge not by altering the linear segmentation of words but by encoding words as parallel bundles of features.
contrasting
train_7621
Therefore, when we write a report or an explanation on a newly developed technology intended for readers including laypersons, it is very important to compose a title that will stimulate their interest in the technology.
technical specialists are not necessarily good at composing appealing titles, because it isn't clear what sort of titles will stimulate the interest of lay readers in the technology.
contrasting
train_7622
Several studies have been reported on title generation (Jin and Hauptmann, 2000; Berger and Mittal, 2000) and readability of texts (Minel et al., 1997; Hartley and Sydes, 1997; Inui et al., 2003).
the studies on title generation focus on generating a very compact summary of the document rather than composing an appealing title.
contrasting
train_7623
fare(A) ∧ airline(B) → on(A, B); meal(A) ∧ flight(B) → on(A, B); flight(A) ∧ day(B) → on(A, B); flight(A) ∧ airline(B) → on(A, B). It is an old observation that in order to choose the correct reading of an ambiguous sentence, we need a great deal of knowledge about the world.
the observation that disambiguation decisions depend on knowledge of the world can be made to cut both ways: just as we need a lot of knowledge of the world to make disambiguation decisions, so a given disambiguation decision can be interpreted as telling us a lot about the way we view the structure of the world.
contrasting
train_7624
Thus, the above pattern is converted to the following: It is now easier to understand the pattern as 'A person C who is elected succeeds a person E'.
it is still not straightforward how one can evaluate the usefulness of such patterns or indeed how one can incorporate the information they carry into a system for disambiguation or reasoning.
contrasting
train_7625
For the remainder of this paper, we will suppress the up marking and write s_up simply as s. Examples 13 and 14 show that prosodically marked and unmarked phrases can combine.
both of these partial derivations produce categories that cannot be combined further.
contrasting
train_7626
Using a best-first search strategy with the adjacency constraint we obtain the following alignment: None of the alignment approaches described above produces the preferred reference alignment in our example using the given clue matrix.
simple iterative procedures come very close to the reference and produce acceptable alignments even for multi-word units, which is promising for an automatic clue alignment system.
contrasting
train_7627
Gold standards can be re-used for additional test runs which is important when examining different parameter settings.
recall and precision derived from information retrieval have to be adjusted for the task of word alignment.
contrasting
train_7628
Non-contiguous elements could also be identified using the same approach simply by removing the adjacency constraint.
this seems to increase the noise significantly according to experiments not shown in this paper.
contrasting
train_7629
Using the IBM translation models IBM-1 to IBM-5 (Brown et al., 1993), as well as the Hidden-Markov alignment model (Vogel et al., 1996), we can produce alignments of good quality.
all these models constrain the alignments so that a source word can be aligned to at most one target word.
contrasting
train_7630
Traditionally, coreference resolution is done by mining the reference relationships between NP pairs.
an individual NP usually lacks adequate description information of its referred entity.
contrasting
train_7631
For example, words of similar semantic types, such as company -government, tend to come up as distributionally similar, even though they are not substitutable in a meaning preserving sense.
many semantic-oriented applications, such as Question Answering, Paraphrasing and Information Extraction, do need to recognize which words may substitute each other in a meaning preserving manner.
contrasting
train_7632
This framework seems better than SVM to select best things.
it is well known that attachment ambiguity of PP is a major problem in parsing.
contrasting
train_7633
It seems that root-node finding is relatively easy and SVM worked well.
pp attachment is more difficult and SVM's behavior was unstable whereas Preference Learning was more robust.
contrasting
train_7634
To give one example, the number of categories in the tag dictionary's entry for the word is is 45 (only considering categories which have appeared at least 10 times in the training data).
in the sentence Mr. Vinken is chairman of Elsevier N.V., the Dutch publishing group., the supertagger correctly assigns 1 category to is for β = 0.1, and 3 categories for β = 0.01.
contrasting
train_7635
There are two main reasons for this: (1) formalisations of information structure often use variants of higher-order logic to characterise its semantic impact (Krifka, 1993; Kruijff-Korbayova, 1998; Steedman, 2000), which limits the use of inference in practice (Blackburn and Bos, 2003); and (2) the effect of information structure on the compositional semantics of an utterance is rarely worked out in enough detail to be useful for computational implementation.
exploring information structure in spoken dialogue systems is becoming realistic now because of the recent advances made in text-to-speech synthesisers and automated speech recognisers; hence there is a growing need for computational implementations of information structure in grammar formalisms.
contrasting
train_7636
The directionality of the attributes of a functor category is marked by the features pre and post on its attributes rather than by the directionality of the slashes as it is done in CCG.
to CCG, UCG only uses forward and backward application as means for combining categories.
contrasting
train_7637
Otherwise it would be impossible to tell which item actually carries the accent for larger phrases such as married Manny H* LL% , where without the above mentioned constraint we could combine married and Manny first to form the unit married Manny, and only then combine this two word unit with the pitch accent.
this is not what we want, because this way we cannot determine any more which of the two words was accented.
contrasting
train_7638
Normally, a new edu is created from the begin position of the cue phrase to the end boundary of the NP.
this procedure may create incorrect results as shown in the example below: (1) [In 1988, Kidder eked out a $46 million profit, mainly][ because of severe cost cutting.]
contrasting
train_7639
Often-cited examples of such interactive robots that have a capability of communicating in natural language are the humanoid robot ROBOVIE (Kanda et al., 2002) and robotic museum tour guides like RHINO (Burgard et al., 1999) (Deutsches Museum Bonn), its successor MINERVA touring the Smithsonian in Washington (Thrun et al., 2000), and ROBOX at the Swiss National Exhibition Expo02 (Siegwart et al., 2003).
dialogue systems used in robotics appear to be mostly restricted to relatively simple finite-state, query/response interaction.
contrasting
train_7640
a different JSAPI-compliant speech recogniser into our system is now a matter of changing a line in a configuration file.
building talking robots is still a challenge that combines the particular problems of dialogue systems and robotics, both of which introduce situations of incomplete information.
contrasting
train_7641
The previous system performs the antecedent estimation process for all the case frames of "orosu", and incorrectly estimates the antecedent of the "wo" zero pronoun as "oroshigane" (grater).
our proposed method deals with only the similar case frames to the cached "orosu (1)".
contrasting
train_7642
The English SemCor corpus has been manually annotated.
some annotation errors can be found in the texts (see Fellbaum et al., 1998, for SemCor taggers' confidence ratings).
contrasting
train_7643
In these cases the problem arises because in principle if the target expression is not a lexical unit it cannot be annotated as a whole.
each component of the free combination of words should be annotated with its respective sense.
contrasting
train_7644
This is not a problem so long as this knowledge is to be applied locally, in face-to-face communication with patients.
as a result of recent developments in technology, including telemedicine and internet-based medical query systems, we now face a situation where such dispersed, practical (human) knowledge does not suffice.
contrasting
train_7645
One such label is medicine; others are surgery and drug.
it was left undecided on what criteria terms should be selected as domain labels and what the relations among the relevant domains should be (arguably, surgery and drug should be included in the wider domain of medicine).
contrasting
train_7646
Currently, when asked to output terms associated with medicine, the browser returns some 504 nouns, verbs, and adjectives (both single words and phrases), representing some 270 different senses.
many cognate senses with clear medical uses are currently not labeled in this way.
contrasting
train_7647
Broad coverage unification-based deep parsers, however, unavoidably have problems meeting the very high accuracy and efficiency requirements needed for real-time dialog.
parsers based on lexicalized probabilistic context free grammars such as those of Collins (1999) and Charniak (1997), which we call shallow parsers 1 , are robust and efficient, but the structural representations obtained with such parsers are insufficient as input for intelligent reasoning.
contrasting
train_7648
As we drop the chart size to 1500, the speed-up drops to just 1.4, as shown in Table 5.
we have improvements in accuracy using skeletons when we parse with low upper limits.
contrasting
train_7649
MMR bases this similarity score on word overlap and additional information about the time when each document was released, and thus can fail to identify repeated information when paraphrasing is used to convey the same meaning.
to these approaches, our model handles redundancy in the output at the same time it selects the output sentences.
contrasting
train_7650
In our experiments, we set out to investigate whether NTPC's operating parameters were overly simple, and whether more complex arrangements were necessary or desirable.
empirical evidence points to the fact that, in this problem of error correction in high-accuracy ranges at least, simple mechanisms will suffice to produce good results; in fact, the more complex operations end up degrading rather than improving accuracy.
contrasting
train_7651
More closely relevant to the experiments described herein, two of the three best-performing teams in the CoNLL-2002 Named Entity Recognition shared task evaluation used boosting as their base system (Carreras et al., 2002; Wu et al., 2002).
precedents for improving performance after boosting are few.
contrasting
train_7652
The annotators used RST Tool (O'Donnell, 1997), which worked reasonably well for the purpose.
since we also have in our group an XML-based lexicon of German connectives at our disposal (Berger et al., 2002), why not use this resource to speed up the first phase of the annotation?
contrasting
train_7653
For each sentence in the given abstract, the corresponding source sentence is determined by combining the similarity score and heuristic rules.
it is known that bag-of-words representation is not optimal for short texts like single sentences (Suzuki et al., 2003).
contrasting
train_7654
From Table 1, we can calculate the WSK value as follows: In this way, we can measure the similarity between two texts.
wSK disregards synonyms, hyponyms, and hypernyms.
contrasting
train_7655
However, we have to admit that the improvements are relatively small for single document data.
tree Kernel did not work well since it is too sensitive to slight differences.
contrasting
train_7656
The experimental results show that the best W is 2 or 3.
we could not find a consistently optimal value of d.
contrasting
train_7657
We make fairer precision evaluation in the next section.
since several related works make evaluation in this setting, we also present precision for reference.
contrasting
train_7658
In this example, we permit " (overseas study)".
we reject " (short-term overseas)" since it does not compose any constituent in the compound word.
contrasting
train_7659
We try to perform the same experiment.
we cannot get the same version of the corpus.
contrasting
train_7660
This seems natural because words are used as a processing unit in the Markov model-based method, and therefore much information about known words (e.g., POS or word bigram probability) can be used.
unknown words cannot be handled directly by this method itself.
contrasting
train_7661
Clearly, this evaluation is at a coarser level of granularity than that required for our final system.
we find it useful for the following reasons: Owing to the number and diversity of newsgroups on the Internet, we can perform controlled experiments where we vary the degree of similarity between newsgroups, thereby simulating discussions with different levels of relatedness.
contrasting
train_7662
Alternatively, one could count all occurrences of a word in a posting, which could be useful for constructing more detailed stylistic profiles.
at present we are mainly concerned with words that appear across postings.
contrasting
train_7663
This suggests that the signature words create undesirable overlaps between the clusters.
when filtering is used, the clustering procedure reaches its best performance with ¤ ¡ , where the performance is extremely good.
contrasting
train_7664
(Lewis, 1992;Dumais et al., 1998), or augmenting the standard BoW approach with synonym clusters or latent dimensions (Baker and Mc-Callum, 1998;Cai and Hofmann, 2003).
none of the more elaborate representations manage to significantly outperform the standard BoW approach (Sebastiani, 2002).
contrasting
train_7665
If we compare the best BoW run (using the linear kernel and tf×idf weighting) and the best BoC run (using 5,000-dimensional vectors with the polynomial kernel and tf×idf weighting), we can see that the BoW representations barely outperform BoC: 82.77% versus 82.29%.
if we only look at the results for the ten largest categories in the Reuters-21578 collection, the situation is reversed and the BoC representations outperform BoW.
contrasting
train_7666
The semantic category assigned to a noun holds the information used for this type of disambiguation.
to disambiguation, aggregation of synonymous expressions is important to organize extracted sentiment units.
contrasting
train_7667
The recall was not so high, especially in the MT method, but according to our error analysis the recall can be increased by adding auxiliary patterns.
it is almost impossible to increase the precision without our deep analysis techniques.
contrasting
train_7668
Comparisons of automatic evaluation metrics for machine translation are usually conducted on corpus level using correlation statistics such as Pearson's product moment correlation coefficient or Spearman's rank order correlation coefficient between human scores and automatic scores.
such comparisons rely on human judgments of translation qualities such as adequacy and fluency.
contrasting
train_7669
The other potential problem for correlation analysis of human vs. automatic framework is that high corpus-level correlation might not translate to high sentence-level correlation.
high sentence-level correlation is often an important property that machine translation researchers look for.
contrasting
train_7670
This example illustrated that ROUGE-L can work reliably at sentence level.
lCS suffers one disadvantage: it only counts the main in-sequence words; therefore, other alternative LCSes and shorter sequences are not reflected in the final score.
contrasting
train_7671
According to Table 3, we find that shorter BLEUS has better correlation with adequacy.
correlation with fluency increases when longer n-grams are considered but decreases after BLEUS5.
contrasting
train_7672
In evaluation of applications presupposing parsing, it is helpful to separate errors due to parsing from intrinsic errors.
one would also like to gauge the end-to-end performance of a system.
contrasting
train_7673
As stated in the previous section, the training utterances are approximately 2.5 times faster than the test utterances.
they are expected to achieve a very low accuracy.
contrasting
train_7674
In this experiment, the number of states for a long vowel phoneme is varied from 3 to 6 states.
the numbers of states for the other phonemes are set to 3 states.
contrasting
train_7675
In a single document, there will be few sentences with the same content.
in multiple documents with multiple sources, there will be many sentences that convey the same content with different words and phrases, or even identical sentences.
contrasting
train_7676
Therefore, we believe that second type of extract is superior and thus we prepared the extracts in that way.
as stated in the previous section, with multiple document summarization, there may be more than one sentence with the same content, and thus we may have more than one set of sentences in the original document that corresponds to a given sentence in the abstract; that is to say, there may be more than one key datum for a given sentence in the abstract.
contrasting
train_7677
Utilizing hard matching pattern rules could obtain precise results from test instances.
the approach is problematic in dealing with natural language text, such as news articles, which often exhibits great variations in both lexical and syntactic constructions.
contrasting
train_7678
In this paper, we aim to minimize the number of hand-tagged training instances needed to start the learning process by adopting a bootstrapping strategy such as that proposed in Riloff and Jones (1999).
to the existing work, we propose a weakly supervised IE framework which takes advantage of both soft and hard matching pattern rules in both the training and test phases.
contrasting
train_7679
One such solution might be a summary of that very email discussion.
it would be much more useful if the summary did not just tell the user what the thread is about.
contrasting
train_7680
The focus of this paper is on email discussions supporting a group decision-making process.
to studies on individual email usage (for an overview see: Ducheneaut and Bellotti, 2001), this research area has been less explored.
contrasting
train_7681
Occasionally, such discussions end with an online vote.
ducheneaut and Bellotti do note that voting is relatively infrequent and our own experience with our email corpora tends to support this.
contrasting
train_7682
The set of group tasks facilitated by the email correspondence were: decision-making, information provision, requests for action and social conversation.
it is natural for the group to engage in multiple tasks.
contrasting
train_7683
They extract sentences based on the presence of subject line key words.
should the subject line not reflect the content of the thread, our method has the potential to extract the true discussion issue, since it is based on the responses of other participants.
contrasting
train_7684
For such threads, we attempt to extract the sentence participants respond to.
again, this may not be the best formulation of the issue.
contrasting
train_7685
The vector computed by the SVD centroid method provides information about the replies and accounts for word associations such as synonyms.
like the centroid method, this vector will include all topics discussed in the replies, even small digressions.
contrasting
train_7686
In this solution, we applied the issue detection algorithm to the reply email in question.
it turns out that most of the tagged responses occurred at the start of each reply email and a more complex approach was unnecessary and potentially introduced more errors.
contrasting
train_7687
When comparing the oracle methods which returned more than one sentence against the n=3 baseline, we found no significant difference in recall.
when comparing precision performance we found that the precision of the Centroid method and of the three oracles differed significantly from the baseline.
contrasting
train_7688
Derived from his supervised transformation-based tagger (Brill, 1992), UTBL uses information from the distribution of unambiguously tagged data to make informed labeling decisions in ambiguous contexts.
to the HMM taggers previously described, which make use of contextual information coming from the left side only, UTBL considers both left and right contexts.
contrasting
train_7689
where σ_k² is the variance for feature dimension k. The variance can be feature-dependent.
for simplicity, constant variance is often used for all features.
contrasting
train_7690
In addition, compared to simple models like n-gram language models (Teahan et al., 2000), another shortcoming of CRF-based segmenters is that they require significantly longer training time.
training is a one-time process, and testing time is still linear in the length of the input.
contrasting
train_7691
Terminological processing has long been recognised as one of the crucial aspects of systematic knowledge acquisition and of many NLP applications (IR, IE, corpus querying, etc.).
term variation has been under-discussed and is rarely accounted for in such applications.
contrasting
train_7692
morphological) is based on stemming: if two term forms share a stemmed representation, they are considered as mutual variants (Jacquemin and Tzoukermann, 1999).
stemming may result in ambiguous denotations related to "overstemming" (i.e.
contrasting
train_7693
Still, using only the patterns from Table 3, we have correctly extracted 35.76% of all GENIA coordinated terms, with more than half of all suggested candidates being found among those that appeared exclusively in coordinations.
these patterns also generated a number of false coordination expressions, and consequently a number of false term candidates.
contrasting
train_7694
And using just transliteration information alone, 9 Chinese words have their correct English translations at rank one position.
using our method of combining both sources of information and setting M = ∞, 19 Chinese words (i.e., the first 22 Chinese words in Table 3 except 巴佐亚,坩埚,普利法) have their correct English translations at rank one position.
contrasting
train_7695
For example, the work of (Al-Onaizan and Knight, 2002a;Al-Onaizan and Knight, 2002b;Knight and Graehl, 1998) used only the pronunciation or spelling of w in translation.
the work of (Cao and Li, 2002;Fung and Yee, 1998;Rapp, 1995;Rapp, 1999) used only the context of w to locate its translation in a second language.
contrasting
train_7696
On the other hand, the work of (Cao and Li, 2002;Fung and Yee, 1998;Rapp, 1995;Rapp, 1999) used only the context of w to locate its translation in a second language.
our current work attempts to combine both complementary sources of information, yielding higher accuracy than using either source of information alone.
contrasting
train_7697
This may look linguistically counter-intuitive.
(Koehn et al., 2003) found that it is actually harmful to restrict phrases to constituents in parse trees, because the restriction would cause the system to miss many reliable translations, such as the correspondence between "there is" in English and "es gibt" ("it gives") in German.
contrasting
train_7698
5(c), both of these rules can be applied to parts of the tree.
they cannot be used at the same time, as they translate the same word to different words and place them in different locations.
contrasting
train_7699
A separate generation module, which often involves some target language grammar rules, is used to linearize the words in the target parse tree.
our transfer rules specify linear order among nodes in the rule.
contrasting