Dataset columns:
id         string (lengths 7–12)
sentence1  string (lengths 6–1.27k)
sentence2  string (lengths 6–926)
label      string (4 classes)
train_7800
The resulting semantic formalism is very similar to the type-theoretic language L λ (Dowty et al., 1981).
we merely use the lambda calculus as a tool for constructing semantic representations, rather than as a formal tool for model-theoretic interpretation.
contrasting
train_7801
In Figure 3, for example, there are different potential standards that would admit either both squares or neither square as small.
we can rule out these candidate standards in interpreting (3).
contrasting
train_7802
In order to perform effective search, we employ question transformation to get effectual web queries.
this is not a trivial task.
contrasting
train_7803
Given the excellent results posted by the best systems and an adequate performance attained even by some entry-level systems, we believe that the process of factoid question answering is now fairly well understood (Harabagiu et al., 2002; Hovy et al., 2000; Prager et al., 2001; Wu et al., 2003).
to a factoid question, an analytical question has a virtually unlimited variety of syntactic forms with only a loose connection between their syntax and the expected answer.
contrasting
train_7804
With the optimal choice of the four words, we get results that are similar to those of Pantel and Lin: their precision is equal to 60.8 with an average number of words defining a sense equal to 14.
table 4 shows that the words selected on the basis of their number of strong links are not strongly linked in WordNet (according to Lin's measure) to their target word.
contrasting
train_7805
These compounds are undoubtedly included for a reason, yet the idea that literal compounds might actually be essential to WordNet's usefulness may strike some as heretical on at least two fronts: first, the lexicon is a finite resource, while the space of compounds is potentially infinite; and at any rate, literal compounds can be created as needed from purely compositional principles (Hanks, 2004).
these retorts are valid only if we view WordNet as a dictionary, but of course it is much more than this.
contrasting
train_7806
We can transform this into another concept M'-H by replacing M with any M' for which: This formulation may suggest a large range of values of M'.
these candidates can be sorted by A rel (M, M'), which estimates the probability that a given M'-H will later be validated as useful.
contrasting
train_7807
It follows then that the novel compounds apple-pizza, chocolate-pizza, taco-pizza, and curry-pizza will all be internally validated as meaningful (if not necessarily enjoyable) varieties of pizza.
the external validation set for a compound M-H is the set of distinct documents that contain the compound term "M H", as acquired using a web search engine.
contrasting
train_7808
These possibilities can be diminished by seeking a large enough sample set, but this has the effect of setting the evidential bar too high for truly creative compounds.
another solution lies in the way that the results of external validation are actually used, as we shall later see.
contrasting
train_7809
Experiments have also shown that the use of menu-based justifications helps students learn with greater understanding, compared with a tutor that does not ask for justifications (Aleven and Koedinger 2002).
there is room for improvement.
contrasting
train_7810
An obvious case is that of synonyms.
there are cases when different words are used as synonyms only in certain contexts.
contrasting
train_7811
chunks and named entities) besides the words within the questions.
the main disadvantage of relying on semantic analyzers, named entity taggers and the like, is that for some languages these tools are not yet well developed.
contrasting
train_7812
For instance Zhang and Sun Lee (Zhang and Lee, 2003) reported an accuracy of 90% for English questions, while Li and Roth (Li and Roth, 2002) achieved 98.8% accuracy.
they used a training set of 5,500 questions and a test set of 500 questions, while in our experiments we used 405 questions for training for each set of 45 test questions (10-fold cross-validation).
contrasting
train_7813
Ideally, we would like to compare our system with other summarizers.
due to the unavailability of other summarization systems to perform the same task, we designed three baseline methods, namely lead-based, randomly-selected, and random-lead-based, to generate summaries for performance comparison; these baselines were also adopted by Brandow et al.
contrasting
train_7814
Several parsers have been implemented in various grammar formalisms and empirical evaluation has been reported: LFG (Riezler et al., 2002; Cahill et al., 2002; Burke et al., 2004), LTAG (Chiang, 2000), CCG (Hockenmaier and Steedman, 2002b; Clark et al., 2002; Hockenmaier, 2003), and HPSG (Miyao et al., 2003; Malouf and van Noord, 2004).
their accuracy was still below the state-of-the-art PCFG parsers (Collins, 1999; Charniak, 2000) in terms of the PARSEVAL score.
contrasting
train_7815
Since deep parsers can output deeper representation of the structure of a sentence, such as predicate argument structures, several studies reported the accuracy of predicate-argument relations using a treebank developed for each formalism.
resources used for the evaluation were not available for other formalisms, and the results cannot be compared with each other.
contrasting
train_7816
However, lexical alternations and arbitrariness of assignments of argument labels will be a problem when we directly compare the output of an HPSG parser with the PropBank.
we can see that the remaining disagreements concern the labels assigned to arguments.
contrasting
train_7817
out of the 400 samples, they agreed only on 294 for action-effect and 297 for action-means.
a closer look at the results revealed that the judgements of the one annotator were considerably but very consistently more tolerant than the other.
contrasting
train_7818
Mapping verb instances to their VN classes has been proven useful for several NLP tasks.
verbs are polysemous with respect to their VN classes.
contrasting
train_7819
The similarity to WSD suggests that our task might be solved by WN sense disambiguation followed by a mapping from WN to VN.
good results are not to be expected, due to the medium quality of today's WSD algorithms and because the mapping between WN and VN is both incomplete and many-to-many.
contrasting
train_7820
We follow common practice and use the standard reference Dirichlet prior, which is uniform on θ, such that α_j = 1 for all j.
to the model above, a hierarchical sampling model assumes that θ varies between documents, and has a distribution which depends upon parameters η.
contrasting
train_7821
Thus p(d|α) (using the standard α notation for Dirichlet parameters) has the Dirichlet compound multinomial distribution p(d|α) = (n! / Π_j c_j!) · (Γ(Σ_j α_j) / Γ(n + Σ_j α_j)) · Π_j (Γ(c_j + α_j) / Γ(α_j)). Maximum likelihood estimates for the α are difficult to obtain, since the likelihood for α is a function which must be maximised for all components simultaneously, leading some authors to use approximate distributions to improve the tractability of maximum likelihood estimation (Elkan, 2006).
we reparameterise the Dirichlet compound multinomial, and estimate some of the parameters in closed form.
contrasting
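To make the Dirichlet compound multinomial of train_7820 and train_7821 concrete, here is a minimal Python sketch (not the authors' code) of its per-document log-likelihood; the closed form follows from integrating the multinomial against a Dirichlet prior, and the uniform reference prior above corresponds to α_j = 1 for all j.

from math import lgamma

def dcm_log_likelihood(counts, alpha):
    # Polya / Dirichlet compound multinomial log-likelihood of one
    # document's word counts (ignoring the count-permutation constant).
    n = sum(counts)
    a0 = sum(alpha)
    ll = lgamma(a0) - lgamma(a0 + n)
    for c, a in zip(counts, alpha):
        ll += lgamma(a + c) - lgamma(a)
    return ll

# Uniform reference prior (alpha_j = 1 for all j), as in train_7820.
print(dcm_log_likelihood([3, 0, 2], [1.0, 1.0, 1.0]))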
train_7822
To reliably estimate variance (and arguably rate as well) would require words to occur a sufficient number of times.
this section will demonstrate that two of the datasets have many words which do not occur with sufficient frequency to estimate parameters, and in those two the linear SVM's performance is more comparable.
contrasting
train_7823
We attribute this to the capacity of the proposed model to capture aspects of word behaviour beyond a simpler model.
in cases where the data contain many infrequent words and the tendency to reuse words is relatively low, defaulting to a linear classifier (either the multinomial for a generative classifier, or preferably the linear SVM) increases performance relative to a more complex model, which cannot be fit with sufficient precision.
contrasting
train_7824
The eigenvalue decomposition of B now incorporates this extra term information provided in D1, and the eigenvectors show stronger correspondence between those terms indicated.
with each term aligned with one or more other terms, the row and column norms of D1 are unequal, which means that some terms may be biased to appear more heavily in the eigenvectors.
contrasting
train_7825
Other distributions could be given, e.g., one which gives more mass to adjunct categories such as (S\NP)\(S\NP) than to ones which are otherwise similar but do not display such symmetry, like (S/NP)\(NP\S).
the most important thing for present purposes is that simpler categories are more likely than more complex ones.
contrasting
train_7826
This property of CCG of supporting multiple derivations of the same analysis has been termed spurious ambiguity.
the extra constituents are anything but spurious: they are implicated in a range of CCG (along with other forms of categorial grammar) linguistic analyses, including coordination, long-distance extraction, intonation, and incremental processing.
contrasting
train_7827
Furthermore, using the lexical category distribution (ΨΛ) to create the transition initialization provides a better starting point than the uniform one (ΨU). [Footnote 8: CCGbank actually corrects many errors in the Penn Treebank, and does not suffer as much from mistagged examples.]
there were two instances of an ill-formed category ((S[b]\NP)/NP)/ in wsj 0595 for the words own and keep.
contrasting
train_7828
This result may seem counterintuitive since neighbors provided by a semantic resource are based on expert knowledge and are often more accurate than those obtained automatically.
semantic resources like WordNet are designed to be as general as possible without a specific corpus or domain in mind.
contrasting
train_7829
Also note that improvements in supervised methods can be expected to automatically translate into improvements in unsupervised ones. Interestingly, label propagation performed relatively poorly on the manually labeled data.
it ranks highly when using the automatic labels.
contrasting
train_7830
Finally we have adopted Tsovaltzi and Karagjosova's top-level structure, with Task as a dimension.
we observe that it would be equally valid and more in keeping with the original DAMSL categorisation of utterance functions to make use of the existing Task sub-category of the Info-level dimension.
contrasting
train_7831
There are many supervised learning methods for training high-performance dependency parsers (Nivre et al., 2007), if given sufficient labeled data.
the performance of parsers declines when we are in the situation that a parser is trained in one "source" domain but is to parse the sentences in a second "target" domain.
contrasting
train_7832
The CoTrain system was similar to the learning scheme described by Sagae and Tsujii (2007).
we did not use the same parsing algorithms as the ones used by Sagae and Tsujii (2007); instead, we trained a forward parser (same as our baseline system) and a backward parser. [Table 5: the results of several adaptation methods with CKIP]
contrasting
train_7833
The best results here, for n=4, are essentially no better on average than those obtained with standard LSA.
averaging across languages obscures the fact that results for Arabic have significantly improved (for example, where Arabic documents are used as queries, MP5 is now 0.6205 instead of 0.4456).
contrasting
train_7834
(2004) present an end-to-end paraphrasing system inspired by phrase-based machine translation that can both acquire paraphrases and use them to generate new strings.
their model is limited to lexical substitution -no reordering takes place -and is lacking the compression objective.
contrasting
train_7835
The great advantages of such systems are their robustness and efficient processing, which make them highly suitable for real-life grammar and style checking applications.
since shallow modules usually cannot provide a full syntactic analysis, the coverage of these systems is limited to error types not requiring a broader (nonlocal) syntactic context for their detection.
contrasting
train_7836
The passive sentences found by Checkpoint are actually passive sentences.
these were not annotated as passives, because the annotators were told to annotate only those stylistic errors for which a paraphrase was possible.
contrasting
train_7837
Obviously, this fact has severely hampered the state-of-the-art of advanced NLP applications.
the Princeton WordNet (WN) is by far the most widely-used knowledge base (Fellbaum, 1998).
contrasting
train_7838
As expected, each semantic resource has different volume and accuracy figures when evaluated in a common and controlled framework (Cuadros and Rigau, 2006).
not all these large-scale resources encode semantic relations between synsets.
contrasting
train_7839
The algorithm finishes when no more pending words remain in P. [Algorithm 1: SSI-Dijkstra, SSI(T: list of terms)] Initially, the list I of interpreted words should include the senses of the monosemous words in W, or a fixed set of word senses.
when disambiguating a TS of a word sense s (for instance party#n#1), the list I already includes s. In order to measure the proximity of one synset to the rest of the synsets in I, we use part of the knowledge already available to build a very large connected graph with 99,635 nodes (synsets) and 636,077 edges.
contrasting
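As an illustration of the proximity computation described in train_7839 (a sketch under assumptions, not the authors' SSI-Dijkstra implementation; the graph encoding, edge weights, and the scoring function are hypothetical), one can run Dijkstra from a candidate synset over the synset graph and score its closeness to the already-interpreted list I:

import heapq

def dijkstra(graph, source):
    # Shortest-path distances from source over a weighted synset graph
    # given as {node: [(neighbor, weight), ...]}.
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def proximity(graph, candidate, interpreted):
    # Closeness of a candidate synset to the interpreted set I:
    # sum of inverse shortest-path distances (unreachable synsets add 0).
    dist = dijkstra(graph, candidate)
    return sum(1.0 / (1.0 + dist[s]) for s in interpreted if s in dist)

The real algorithm would iterate, repeatedly fixing the highest-proximity sense and moving its word from the pending list P to I.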
train_7840
Note that in WSD evaluation frameworks, this is a very basic baseline.
in our evaluation framework, this "WSD baseline" could be considered as an upper-bound.
contrasting
train_7841
As expected, KnowNet-5 obtains the lowest results.
it performs better than WN (and all its extensions) and spBNC.
contrasting
train_7842
One simple way of verifying the latter case is by looking at the number of senses assigned to the prepositions by a resource such as the Oxford English Dictionary. [Table 5: confusion matrix for L1 data, prepositions]
we find no good correlation between the two as the preposition with the most senses is of (16), and that with the fewest is from (1), thus negating the idea that fewer senses make a preposition easier to learn.
contrasting
train_7843
This suggests that the frequency effect is not so strong as to override any true linguistic information the model has acquired, otherwise the predominant choice would always be the null case. [Table 9: confusion matrix for L1 determiners]
these results show that the model is indeed capable of distinguishing between contexts which require a determiner and those which do not, but requires further fine tuning to perform better in knowing which of the two determiner options to choose.
contrasting
train_7844
For example, a common source of confusion in learners is between by and from, as in I like it because it's from my favourite band.
this confusion is not very frequent in the model, a difference which could be explained either by the fact that, as noted above, performance on from is very low and so the classifier is unlikely to suggest it, or that in training the contexts seen for by are sufficiently distinctive that the classifier is not misled like the learners.
contrasting
train_7845
A more detailed discussion of the issues arising from the comparison of confusion pairs is not possible here.
in noting both divergences and similarities between the two learners, human and machine, we may be able to derive useful insights into the way the learning processes operate, and what factors could be more or less important for them.
contrasting
train_7846
For this, we could take all framed nuclei from a corpus and compare the level of ambiguity for differing abstractions.
most framed nuclei occur only once, and it is not clear how meaningful it is to say that these are unambiguous.
contrasting
train_7847
Carpenter and Morrill (2005) provided a graph representation and a dynamic programming algorithm for parsing in the Lambek calculus with product.
due to their use of the Lambek calculus with product and to their choice of correctness conditions, they did not obtain a polynomial time algorithm for any significant fragment of the calculus.
contrasting
train_7848
The algorithm as described above is a method for answering the decision problem for sequent derivability in the Lambek calculus.
we can annotate the ATGs with the ATGs they are derived from so that a complete set of Roorda-style proof nets, and thus the proofs themselves, can be recovered.
contrasting
train_7849
For example, the technology for identifying paraphrases would play an important role in aggregating the wealth of uninhibited opinions about products and services that are available on the Web, from both the consumers and producers viewpoint.
whenever we draw up a document, we always seek the most appropriate expression for conveying our ideas.
contrasting
train_7850
0 ≤ α ≤ 1 is a parameter for approximating KL divergence D. The score can be recast into a similarity score via, for example, the following function (Fujita and Sato, 2008): This measure offers an advantage: the weight for each feature is determined theoretically.
the optimization of α is difficult because it varies according to the task and even the data size (confidence of probability distributions).
contrasting
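The excerpt in train_7850 does not reproduce the Fujita and Sato (2008) function itself, so as a hedged stand-in, here is one common way such a score is built: an α-smoothed (skew) approximation of KL divergence (Lee, 1999), recast into a similarity via a negative exponential. Both the smoothing form and the exp mapping are assumptions for illustration, not the paper's definition.

import math

def skew_divergence(p, q, alpha):
    # KL(p || alpha*q + (1-alpha)*p): a smoothed stand-in for the KL
    # divergence D that stays finite when q has zero entries.
    d = 0.0
    for pi, qi in zip(p, q):
        if pi > 0.0:
            d += pi * math.log(pi / (alpha * qi + (1.0 - alpha) * pi))
    return d

def similarity(p, q, alpha=0.99):
    # One way to recast a divergence into a (0, 1] similarity score.
    return math.exp(-skew_divergence(p, q, alpha))

As the excerpt notes, the best α is task- and data-dependent, which is exactly what makes its optimization difficult.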
train_7851
This result is quite natural because MDS cannot verify the collocation between content words in those cases where a number of function words appear between them.
cFDS with N = 3 could verify this as a result of treating the sequence of function words as a single node.
contrasting
train_7852
Moreover, as indicated by the double-headed arrows in the figure, there are a number of bilingual collocations.
as shown in Figure 1, the Reuters document is classified into "Science and Technology," while the Mainichi document is classified into "Space Navigation".
contrasting
train_7853
As a result, "U.S. Treasury has no comment on Hashimoto fx remarks" in Reuters category "Forex markets" and the document "Hashimoto" are not retrieved by a single hierarchy approach.
in the integrating method, these two documents are classified correctly into a pair of similar categories, i.e., the "U.S Treasury" is classified into Reuters category "Forex markets", and the "Hashimoto" is classified into Mainichi category "Money and banking".
contrasting
train_7854
As shown in Table 6, the performance by integrating hierarchies was much better than that of the non-hierarchical approach, and slightly better than those obtained by a single hierarchy.
the correctly retrieved collocations differed from each other.
contrasting
train_7855
It is interesting to note that 12 of 154 collocations, such as "earn medal" and "block shot" obtained by integrating hierarchies were also obtained by a single hierarchy approach.
other collocations such as "get strikeout" and "make birdie" which were obtained in a particular category (Sport, Baseball) and (Sport, Golf), did not appear in either of the results using a single hierarchy or a non hierarchical approach.
contrasting
train_7856
Thus, positive and negative opinions are known as they are separated by reviewers.
they cannot be used directly because Pros and Cons seldom contain comparative words.
contrasting
train_7857
In the case of Type 2 comparatives, the situation is similar.
the comparative word ("more", "most", "less" or "least"), the adjective/adverb and the feature are all important in determining the opinion or the preference.
contrasting
train_7858
Since we use Pros and Cons as the external information source to help determine whether the combination of a comparative and an entity feature is positive or negative, we need to find comparative and entity features words in Pros and Cons.
in Pros and Cons, comparatives are seldom used (entity features are always there).
contrasting
train_7859
The result decides the preferred entity.
point-wise mutual information (pMI) is commonly used for computing the association of two terms (e.g., Turney 2002), defined as pMI(C, F) = log( Pr(C, F) / (Pr(C) Pr(F)) ). We argue that pMI is not a suitable measure for our purpose.
contrasting
train_7860
We are more interested in the conditional probability of C (including its synonyms) given F, which is essentially the confidence measure in traditional data mining.
confidence does not handle well the situation where C occurs frequently but F appears rarely.
contrasting
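To see why train_7859 and train_7860 contrast pMI with confidence, here is a toy computation from co-occurrence counts; the counts, variable names, and example numbers are hypothetical.

import math

def pmi(n_cf, n_c, n_f, n_total):
    # Point-wise mutual information of comparative C and feature F,
    # estimated from raw counts.
    p_cf = n_cf / n_total
    p_c = n_c / n_total
    p_f = n_f / n_total
    return math.log(p_cf / (p_c * p_f))

def confidence(n_cf, n_f):
    # Conditional probability Pr(C | F): the data-mining confidence.
    return n_cf / n_f

# A feature F seen only once, co-occurring with C that once, gets both a
# high pMI and a confidence of 1.0 despite the thin evidence; this is the
# rare-F situation the excerpt says confidence does not handle well.
print(pmi(1, 500, 1, 10_000), confidence(1, 1))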
train_7861
In XMG, node names are by default local to a class.
explicit IMPORT and EXPORT declarations can be used to make names "visible" to child classes.
contrasting
train_7862
Due to our small corpus, we had to limit the number and the complexity of the features we use, since the more features, the more sparse the data, and the more training data needed. [Table 5: performance of other models]
we aim to expand the feature set with more fine-grained features.
contrasting
train_7863
Current state-of-the-art IE systems are mostly based on general purpose supervised machine learning techniques (e.g., kernel methods).
supervised systems achieve acceptable accuracy only if they are supplied with a sufficiently large amount of training data, usually consisting of manually annotated texts.
contrasting
train_7864
Several errors are due to a lack of information in WordNet. For example, Leonhard Euler was a mathematician and a physicist; in WordNet, however, he is classified as a physicist, and our system classifies him as a mathematician.
for simplicity, the algorithm returns a single category per instance; however, the test set contains many entities that are classified in more than one category.
contrasting
train_7865
The high WordNet::Domains results (DomEntropy and DomTop3%) probably reflect the fact that they are produced using the same resource as the creation of the test sets.
knowledge of correct senses is not required for the homogeneity measures and these scores indicate that they are capable nonetheless of capturing topic homogeneity.
contrasting
train_7866
The data under (1) and (2) shortly repeat this well-known pattern.
(1) be quiet for an hour / *in an hour; swim for an hour / *in an hour. (2) arrive *for an hour / in an hour; build a tower *for an hour / in an hour. after closer inspection, these claimed test cases do not seem to be as simple and clear-cut as one would like, given their fundamental theoretical status.
contrasting
train_7867
The semantic material just combines normally via superposition, as can be seen in (11), and the algorithm finishes here.
(11) Λ(V_tel in an hour) = ⟨a, ¬o, time(m)⟩ ⟨¬a, ¬o⟩ + ⟨¬a, o, time(n), hour(m,n)⟩. in case of an aspectual clash between preposition and event description, combining the concepts leads to a contradiction at some predetermined position inside the complex situational type, as happens in (12).
contrasting
train_7868
Overall our system is competitive, with best results for coverage (100%), second best for BLEU and SSA scores, and third best overall on exact match.
we admit that automatic metrics such as BLEU are not fully reliable to compare different systems, and results vary widely depending on the coverage of the systems and the specificity of the generation input.
contrasting
train_7869
The linear count expectations can be computed efficiently by the forward-backward recurrence for HMMs.
we have to design new algorithms for quadratic count expectations which will be done in the rest of this section.
contrasting
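For the linear count expectations mentioned in train_7869, the standard forward-backward recurrence suffices; below is a minimal NumPy sketch of that recurrence (the quadratic expectations the authors develop require new machinery not shown here, and all variable names are illustrative).

import numpy as np

def forward_backward(pi, A, B, obs):
    # Posterior state marginals gamma[t, k] = P(state_t = k | obs) for an
    # HMM with initial distribution pi (K,), transitions A (K, K), and
    # emissions B (K, V); linear count expectations are sums of these.
    T, K = len(obs), len(pi)
    alpha = np.zeros((T, K))
    beta = np.zeros((T, K))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)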
train_7870
The linguistically syntax-based models (Liu et al., 2006;Huang et al., 2006) can distinguish syntactic structures by parsing source sentence.
as an LHS tree may correspond to different RHS strings in different rules (the right two rules of Figure 1), these models also face the rule selection problem during decoding.
contrasting
train_7871
The MaxEnt RS model combines rich context information of grammar rules, as well as information about the subphrases which will be reduced to the nonterminal X during decoding.
this information is ignored by Chiang's hierarchical model.
contrasting
train_7872
Here, the baseline translates the Chinese phrase " " into "booked" by using the rule: The meaning is not fully expressed since the Chinese word " " is not translated.
the MaxEnt RS model obtains a correct translation by using the rule: we also find that some results produced by the MaxEnt RS models seem to decrease the BLEU score.
contrasting
train_7873
As Chinese text is written without word boundaries, effectively recognizing Chinese words is like recognizing collocations in English, substituting characters for words and words for collocations.
existing topical models that involve collocations have a common limitation.
contrasting
train_7874
Better translation quality can be expected from pattern-based MT and example-based MT where the syntactic structure and semantics are handled together.
pattern-based MT requires immense pattern dictionaries that are difficult to develop (Jung et al., 1999; Uchino et al., 2001).
contrasting
train_7875
The results show that the whole dictionary covered almost all input sentences.
many cases of matches to semantically inappropriate patterns occurred, and the semantic coverage decreased to 78% when these were eliminated.
contrasting
train_7876
This result also shows that the accuracies of the SR algorithm and CC algorithm are comparable when using the same features.
this does not mean that their substantial power is comparable because the parsing order limits the available dynamic features.
contrasting
train_7877
To the best of our knowledge, no research has been published on generating the SS given the FS of a Chinese couplet.
because our task can be viewed as generating the second line of a special type of poetry given the first line, we consider automatic poetry generation to be the most closely related existing research area.
contrasting
train_7878
Besides the usual local features such as the character-based ones (Xue and Shen, 2003;Ng and Low, 2004), many non-local features related to POSs or words can also be employed to improve performance.
as such features are generated dynamically during the decoding procedure, incorporating these features directly into the classifier results in problems.
contrasting
train_7879
This is based on the following viewpoint: on the one hand, compared with the initial input character sequence, the pruned word lattice has a much smaller search space while retaining a high oracle F-measure, which enables us to conduct more precise reranking over this search space to find the best result.
as the structure of the search space is approximately outlined by the topological directed architecture of the pruned word lattice, we have a much wider choice for feature selection, which means that we would be able to utilize not only features topologically before the position currently under consideration, just like those depicted in Table 2 in section 4, but also information topologically after it, for example the next word W1 or the next POS tag T1.
contrasting
train_7880
This concept is intuitive when reasoning about the link between syntax and semantics, and it has been used earlier in semantic interpreters such as Absity (Hirst, 1983).
except for a few tentative experiments (Toutanova et al., 2005), grammatical function is not explicitly used by current automatic SRL systems, but instead emulated from constituent trees by features like the constituent position and the governing category.
contrasting
train_7881
As can be seen, the constituent-based systems outperform the dependency-based systems on average.
the picture is not clear enough to draw any firm conclusion about a fundamental structural difference.
contrasting
train_7882
Again, there are no clear differences that can be attributed to syntactic formalism.
this result is positive, because it shows clearly that SRL can be used in situations where only dependency parsers are available.
contrasting
train_7883
For example, from a consumers' forum on digital cameras, the underlined parts in Examples (3) and (4) from Section 1 apparently describe the writer's demands, so they are valuable information for users of demand analysis such as the makers of digital cameras.
the request in Example (5) does not express the author's demands about any digital camera, but rather it is written for other participants in the forum.
contrasting
train_7884
As dependency paths contain various words along with nouns and verbs, other methods often mentioned in the literature would be more difficult to use.
in the future we are going to extend this approach by using syntactically analyzed corpora and by estimating distributional similarity from it.
contrasting
train_7885
For example, the coordinate structure "((mail and securities) fraud)" is guided by the estimation that mail fraud is a salient compound nominal phrase.
the coordinate structure "(corn and (peanut butter))" is chosen because corn butter is not a familiar concept.
contrasting
train_7886
Since both formulae correspond with a set of animals in the domain, referential ambiguity can result.
the black sheep and goats is shorter and possibly more fluent.
contrasting
train_7887
Hence, it seems justifiable to have gre avoid such kind of ambiguities.
it also seems plausible that some readings may be very unlikely.
contrasting
train_7888
If a participant removes 2 red lions and 2 red horses, we count it as a wide-scope reading.
if (s)he removes all the horses we count it as a narrow-scope reading.
contrasting
train_7889
Active learning is a proven method for reducing the cost of creating the training sets that are necessary for statistical NLP.
there has been little work on stopping criteria for active learning.
contrasting
train_7890
The empirical probabilities for the adjustment table are estimated with leave-one-out on the training set.
since the training set is created by selective sampling, it will be biased.
contrasting
train_7891
He then suggests to use this peak confidence as the stopping criterion.
in our experiments with multiclass logistic regression, we could not find this peak pattern when calculating the confidence using the three uncertainty measures introduced above: 1-Entropy, Margin and MinMax.
contrasting
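For reference, here are common formulations of the three uncertainty measures named in train_7891, computed from a classifier's class-probability vector. The exact definitions used in the cited experiments (especially of MinMax) may differ, so treat these as plausible readings rather than the paper's code.

import math

def one_minus_entropy(probs):
    # 1 minus normalized entropy: 1.0 for a peaked posterior, 0.0 for uniform.
    h = -sum(p * math.log(p) for p in probs if p > 0.0)
    return 1.0 - h / math.log(len(probs))

def margin(probs):
    # Difference between the two highest class probabilities.
    top = sorted(probs, reverse=True)
    return top[0] - top[1]

def min_max(probs):
    # One plausible reading: the largest posterior probability.
    return max(probs)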
train_7892
The day after a debate, most papers may declare Bush the winner, yielding a rise in the price of a "Bush to win" share.
while the debate may be discussed for several days after the event, public opinion of Bush will probably not continue to rise on old news.
contrasting
train_7893
Expert algorithms for combining prediction systems have been well studied.
experiments with the popular weighted majority algorithm (Littlestone and Warmuth, 1989) yielded poor performance since it attempts to learn the optimal balance between systems while our setting has rapidly shifting quality between few experts with little data for learning.
contrasting
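The weighted majority algorithm referenced in train_7893 can be stated in a few lines; this sketch (the discrete-label setting and all names are assumptions) makes visible why it needs many rounds of data to rebalance expert weights, which the excerpt identifies as the problem in a rapidly shifting setting.

def weighted_majority(experts, labels, beta=0.5):
    # Weighted majority (Littlestone and Warmuth, 1989): keep one weight
    # per expert, predict by weighted vote, and multiply the weight of
    # every expert that erred by beta. experts is a list of per-round
    # prediction sequences; labels is the true outcome per round.
    weights = [1.0] * len(experts)
    predictions = []
    for t, truth in enumerate(labels):
        votes = {}
        for w, expert in zip(weights, experts):
            votes[expert[t]] = votes.get(expert[t], 0.0) + w
        predictions.append(max(votes, key=votes.get))
        weights = [w * (beta if expert[t] != truth else 1.0)
                   for w, expert in zip(weights, experts)]
    return predictions, weights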
train_7894
In each market, the baseline news system makes a small profit, but the overall performance of the combined system is worse than the market history system alone, showing that the news baseline is ineffective.
all news features improve over the market history system; news information helps to explain market behaviors.
contrasting
train_7895
", this question can be easily classified into "NUM:distance".
the above SVM misclassified this question into "LOC:-state", as the words "state" and "Alaska" confused the classifier.
contrasting
train_7896
Previous summarization tasks are all targeted on a single document or a static collection of documents on a given topic.
the document collections can change (actually grow) dynamically when the topic evolves over time.
contrasting
train_7897
That means only the content of the sentences is considered.
for the tasks of query-oriented summarization, the reinforcement should obviously be biased toward the user's query.
contrasting
train_7898
In the process of accumulation, cQA services assume that questions always have unique best answers.
with an indepth analysis of questions and answers on cQA services, we find that the assumption cannot be true.
contrasting
train_7899
The best answer chosen by the asker is very good because it provides useful URL information where the asker can find her answers.
other answers listed a variety of birthdays of stars that also answered the question.
contrasting