id: string, lengths 7–12
sentence1: string, lengths 6–1.27k
sentence2: string, lengths 6–926
label: string, 4 classes
train_92800
Using an HMM that has been initialized with grammar-based transition probabilities, combined with a two-stage integer programming strategy, this approach can achieve single-best accuracy of 64.3% on ambiguous supertags.
the identifier and classifier models are then trained from these features, instead of from features obtained from gold-standard syntactic derivations.
neutral
train_92801
(2005a), and the F-measure was 66%.
we regarded the predicate of " (do) + particle + noun" such as " " in Example (4) as " (do)."
neutral
train_92802
When training with the full feature set, the suffix1=国 predicate is active for all of those country names and has a large feature weight associated with the preferred latent tag.
the no+simple row in table 2 represents the baseline, for which the grammars are trained without rare word smoothing described in Subsection 2.1 and OOV words are handled by the simple method described in Subsection 2.2.
neutral
train_92803
Ng (2010) presents a comprehensive review of recent approaches to within-document coreference resolution.
candidate ranking then considers each candidate in greater detail, producing a ranked list.
neutral
train_92804
Our approach depends on having a quantity of labelled data on which to train a classifier.
we apply English NER to find person names in text (Ratinov and Roth, 2009), our English entity linking system to identify candidate entity IDs, and English annotators on Amazon's Mechanical Turk to select the correct kbid for each name.
neutral
train_92805
For , we use the probabilities of alignment pairs which are computed by GIZA++.
they usually need manually annotated, high-performance textual corpora.
neutral
train_92806
We propose an improved latent topic model by introducing the category information of questions.
mAP rewards methods that return relevant questions early and also rewards correct ranking of the results.
neutral
train_92807
We frame the task of selecting event-relevant sentences as a binary classification problem over all sentences in the corpus.
our semi-supervised learning approach balances the pros and cons of both supervised and unsupervised approaches.
neutral
train_92808
Table 6 shows the results of the extrinsic evaluation.
unlike Sassano and Neubig et al.
neutral
train_92809
If we assume an unmarked character as substitution error of one voiced consonant to one voiceless consonant, the task of detecting an unmarked character can be considered as a kind of error correction.
most Japanese people take the phrase to refer to the Meiji and Taisho Eras; we also use the phrase in this narrower sense.
neutral
train_92810
In Table 1, we present the statistics of the voiced consonants in "Kokumin-no-tomo" corpus which we will use for our evaluation.
the frequency of each Kanji character varies.
neutral
train_92811
Procedure 1 is a pseudocode representation of the OT method.
given a phoneme a_x, we set P̂(a_x) to be the maximum likelihood estimate: where n_{a_x} is the number of times a_x occurs in D_train, n_* = Σ_{p∈Π} n_p, and α_0 is the unigram smoothing constant.
neutral
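The smoothed estimate in the pair above can be sketched in Python. The formula itself is elided in the text, so the standard add-α form is assumed here; the function name and the choice of inventory (observed phonemes only) are illustrative, not from the paper.

```python
from collections import Counter

def smoothed_unigram(train_phonemes, alpha0=1.0):
    """Add-alpha smoothed unigram estimate:
    P(a_x) = (n_{a_x} + alpha0) / (n_* + alpha0 * |Pi|),
    where n_* is the total phoneme count over the training data."""
    counts = Counter(train_phonemes)      # n_p for each phoneme p
    inventory = set(counts)               # Pi (assumption: observed phonemes only)
    n_star = sum(counts.values())         # n_* = sum over p in Pi of n_p
    denom = n_star + alpha0 * len(inventory)
    return {p: (counts[p] + alpha0) / denom for p in inventory}

probs = smoothed_unigram(list("aabbbc"), alpha0=1.0)
```

By construction the estimates sum to one, and unseen mass is controlled by α_0.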
train_92812
Following the previous work (Zhao et al., 2006;Zhao et al., 2010), we employ the linear chain CRFs (Lafferty et al., 2001) as our learning model.
the best score in each column is in boldface.
neutral
train_92813
We also tried to use different lexicons, as well as representing the feature with only one POS tag with the highest frequency.
z&C 10 refers to Zhang and Clark (2010).
neutral
train_92814
The phrase-level sentiment annotation focuses on phrases rather than words, on the grounds that the atomic units of expression are not individual words but rather appraisal groups (Whitelaw et al., 2005).
among the five studied algorithms, the T-S algorithm is the best choice for the proposed LFS framework.
neutral
train_92815
We randomly divide the labeled data set into five folds so that each fold at least contains one example instance labeled by each attribute node.
to test which is best suited to the proposed LFS framework, we further comparatively study five classic feature selection algorithms, respectively based on document frequency (DF) (Manning et al., 2008), mutual information (MI) (Manning et al., 2008; Church and Hanks, 1990), χ²-statistic (CHI) (Manning et al., 2008), information gain (IG) (Mitchell, 1997), and term strength (T-S) (Wilbur and Sirotkin, 1992).
neutral
train_92816
In the HL-SOT approach, each target text is encoded by a vector in a globally unified d-dimensional index term space and is respectively labeled by different nodes 2 of SOT in a hierarchical manner.
the measurement for selecting terms as features should be calculated among nodes on the same hierarchy.
neutral
train_92817
The approach aims at classifying both sentiment expressions as well as their targets using a rich set of linguistic features.
we convert the tree structure to a linear sequence of relations between neighboring segments.
neutral
train_92818
By following the compositionality-based polarity decision rules, the polarity of a noun or a noun phrase that has no inherent polarity is determined by its modifier's polarity.
detecting the sentiment of a given grammatical unit such as phrase, clause or sentence has received much attention in opinion mining and sentiment analysis.
neutral
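The compositionality-based polarity rule described in the pair above can be sketched as a one-line function. This is a minimal illustration assuming a three-valued polarity scheme; the function name is hypothetical.

```python
def phrase_polarity(noun_polarity, modifier_polarity):
    """Compositionality rule sketch: a noun (phrase) with no inherent
    polarity inherits the polarity of its modifier; otherwise it keeps
    its own polarity."""
    return noun_polarity if noun_polarity != "neutral" else modifier_polarity

p = phrase_polarity("neutral", "negative")   # the modifier's polarity wins
```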
train_92819
By considering these two sentences together, we can see that a 'negative' sentiment is conveyed.
in some contexts, the polarity of the adjectival modifier may not always be correctly determined by such rules, especially when the adjectival modifier characterizes the noun so that its denotation becomes a particular concept or an object in customer reviews.
neutral
train_92820
To generate the n th word in document d we draw the topic id z n d from the document-specific topic distribution θ d , and then draw the word from the word distribution corresponding to the chosen topic φ zn d .
the training set consists of sentences annotated with the relations and their directionality.
neutral
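The generative process described in the pair above (draw topic z from the document-specific distribution θ_d, then draw the word from the chosen topic's word distribution φ_z) can be sketched with stdlib sampling. The toy θ_d and φ values are illustrative, not from the paper.

```python
import random

rng = random.Random(0)

def draw(dist):
    """Draw an index from a discrete distribution given as a list of probs."""
    r, cum = rng.random(), 0.0
    for i, p in enumerate(dist):
        cum += p
        if r < cum:
            return i
    return len(dist) - 1

def generate_document(theta_d, phi, n_words):
    """For each of n_words tokens: draw topic z ~ theta_d, then
    word w ~ phi[z]."""
    return [draw(phi[draw(theta_d)]) for _ in range(n_words)]

theta_d = [0.7, 0.3]                 # P(topic | document d)
phi = [[0.9, 0.1, 0.0],              # P(word | topic 0)
       [0.0, 0.2, 0.8]]              # P(word | topic 1)
doc = generate_document(theta_d, phi, 10)
```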
train_92821
4) can improve the performance; and (2) whether the performance can be improved if using unlabeled data (see D k in Eq.
1) with all the features, and then remove one group of features each time.
neutral
train_92822
Contrary to expectations, removing the length features does not lead to as remarkable a drop as removing other features (p-value = 0.12).
when a user issues a query, recommending tweets of good quality has become extremely important to satisfy the user's information need: how can we retrieve trustworthy and informative posts to users?
neutral
train_92823
The performance was boosted with such a factor.
we adopt a sentiment lexicon (SentiWordNet) and collect the top 200 frequent emoticons from our tweet corpus to identify positive and negative sentiment words.
neutral
train_92824
To label the target language documents automatically we propose a method called crosslanguage guided clustering (CLGC).
second, a key assumption made in both these approaches is that the class labels across languages are completely shared.
neutral
train_92825
This cluster mapped to a concept in Hindi which had words such as " Ê ", " " and " " where the first two words are translations for the words "communication" and "retail" respectively.
an edge is added between every pair of vertices (C S i , C T j ) where the weight of the edge is given by Sim X (C S i , C T j ).
neutral
train_92826
For the relation descriptor extraction problem, however, we expect that there is either a single relation descriptor sequence or no such sequence.
let t i denote the POS tag of w i and p i denote the phrase boundary tag of w i .
neutral
train_92827
A relation descriptor may also contain multiple relations.
the space of label sequences should be reduced to only those that satisfy the above constraint.
neutral
train_92828
Based on our preliminary experiments, we have found that using a smaller set of general POS tags instead of the Penn Treebank POS tag set could slightly improve the overall performance.
while in linear-chain CRFs the correct label sequence competes with all possible label sequences for probability mass, for our task the correct label sequence should compete with only other valid label sequences.
neutral
train_92829
Table 1 illustrates the actual ranked lists available in the repository for various phrases. [Table 2: set of 75 target instances used in the evaluation of instance attribute extraction: aaa, ac compressors, acheron, acrocyanosis, adelaide cbd, african population, agua caliente casino, al hirschfeld, alessandro nesta, american fascism, american society for horticultural science, ancient babylonia, angioplasty, annapolis harbor, antarctic region, arlene martel, arrabiata sauce, artificial intelligence, bangla music, baquba, bb gun, berkshire hathaway, bicalutamide, blue jay, boulder colorado, brittle star, capsicum, carbonate, carotid arteries, chester arthur, christian songs, cloxacillin, cobol, communicable diseases, contemporary art, cortex, ct scan, digital fortress, eartha kitt, eating disorders, file sharing, final fantasy vii, forensics, habbo hotel, halogens, halophytes, ho chi minh trail, icici prudential, jane fonda, juan carlos, karlsruhe, kidney stones, lipoma, loss of appetite, lucky ali, majorca, martin frobisher, mexico city, pancho villa, phosphorus, playing cards, prednisone, right to vote, robotics, rouen, scientific revolution, self-esteem, spandex, strattera, u.s., vida guerra, visual basic, web hosting, windsurfing, wlan]
each query is accompanied by its frequency of occurrence in the query logs.
neutral
train_92830
If the input set of queries increased, additional candidate attributes would be extracted.
"kelley blue book value of 2008 dodge charger" and "kelley blue book value of 2008 honda civic" can be grouped into a shared query template "kelley blue book of 2008 ⋆", whose slot ⋆ is filled by the names of car models.
neutral
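The query-template grouping in the pair above can be sketched as follows. This is a toy version that templatizes a fixed-length prefix and keeps templates whose trailing ⋆ slot is filled by more than one value; the function name and prefix-length parameter are illustrative assumptions, not the paper's algorithm.

```python
from collections import defaultdict

def group_by_prefix_template(queries, prefix_len=6):
    """Queries sharing the same first `prefix_len` tokens are grouped
    under a template whose trailing slot * is filled by the remainder."""
    templates = defaultdict(set)
    for q in queries:
        toks = q.split()
        if len(toks) > prefix_len:
            tmpl = " ".join(toks[:prefix_len]) + " *"
            templates[tmpl].add(" ".join(toks[prefix_len:]))
    # keep only templates whose slot takes 2+ distinct fillers
    return {t: f for t, f in templates.items() if len(f) > 1}

qs = ["kelley blue book value of 2008 dodge charger",
      "kelley blue book value of 2008 honda civic"]
groups = group_by_prefix_template(qs)
```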
train_92831
Besides the above reasons, 280 out of 1,916 queries did not exist in the clickthrough logs, resulting in our system not being able to extract the correct query.
murayama and Okumura (2008) formulated the process of generating Japanese abbreviations by noisy channel model but they did not handle abbreviation expansion.
neutral
train_92832
These queries are possibly synonymous with the input query and thus can be corrected without semantic transformation.
we extracted 50 candidates from clickthrough logs and then reranked using three methods: 1.
neutral
train_92833
Although these are high-quality corpora, they have some limitations: (1) they tend to be domain specific (e.g., government related texts); (2) they are available in only a few languages; and (3) sometimes they are not free or there is some restriction for using them.
to measure the quality of the BS Detector, we manually labeled 200 Web sites (100 positive and 100 negative) from the dmoz directory in topics related to Spanish speaking countries.
neutral
train_92834
This part is an engineering task where the mapping should be best fitted to the corpus.
this model was motivated by Collins (1999)'s head-outward dependency model and Hockenmaier (2003)'s generative model for parsing CCG.
neutral
train_92835
If both feature vectors contain only flat features, the subtraction is straightforward, since each flat feature is real-valued.
to reduce a ranking problem to an equivalent classification problem, we need to convert the training set for the joint CR model to an equivalent training set that can be used to train a classifier.
neutral
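The reduction of ranking to classification via feature-vector subtraction, described in the pair above for flat real-valued features, can be sketched like this. The pairwise construction (one +1 and one −1 difference instance per ordered pair) is a standard form assumed here, not a verbatim reproduction of the paper's procedure.

```python
def to_pairwise_instances(ranked_candidates):
    """Convert a ranked list of flat feature vectors into classification
    instances: each pairwise difference vector is labeled +1 when the
    first candidate outranks the second, -1 for the reversed pair."""
    instances = []
    n = len(ranked_candidates)
    for i in range(n):
        for j in range(i + 1, n):
            hi, lo = ranked_candidates[i], ranked_candidates[j]
            diff = [a - b for a, b in zip(hi, lo)]
            instances.append((diff, +1))
            instances.append(([-d for d in diff], -1))
    return instances

pairs = to_pairwise_instances([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
```

A linear classifier trained on these differences then assigns higher-ranked candidates more positive scores than lower-ranked ones.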
train_92836
's (2001) method for creating training instances.
to do so, SVM light needs to position the hyperplane so that an instance with a higher rank in t is assigned a more positive value by the hyperplane than one with a lower rank in t .
neutral
train_92837
Given an instance i(NULL, NP k ), we extract the substructure from the parse tree containing NP k as follows.
b 3 and CEAF F-measure scores rise by 1.0% and 1.9%, respectively.
neutral
train_92838
", which show the error reduction of a system relative to the Baseline joint CR model.
this baseline model does not employ any tree-based or path-based features.
neutral
train_92839
A potential issue is that the proposed model might be susceptible to the sparseness issue.
another line of research treats sentence compression as machine translation, in which tree-based translation models have been developed (Galley and McKeown, 2007;Cohn and Lapata, 2008;Zhu et al., 2010).
neutral
train_92840
In Section 4 we introduce the decoding algorithm.
sLFs is an instance of the broader spectrum of text-to-text generation problems, which includes summarization, sentence compression, paraphrasing, and sentence fusion.
neutral
train_92841
We explore three approaches to incorporate the query information in theme-based summarization, including query-driven cluster ranking, query-embedding similarity measure and semi-supervised clustering.
sentence clustering requires the affinity matrix that is built upon the cosine similarity between the two sentences.
neutral
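The affinity matrix built from pairwise cosine similarity, as described in the pair above, can be sketched directly; the function names are illustrative.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sentence vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def affinity_matrix(sentence_vectors):
    """Affinity matrix whose (i, j) entry is the cosine similarity
    between sentence vectors i and j."""
    n = len(sentence_vectors)
    return [[cosine(sentence_vectors[i], sentence_vectors[j])
             for j in range(n)] for i in range(n)]

A = affinity_matrix([[1, 0], [1, 1], [0, 1]])
```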
train_92842
When the given documents are all supposed to be about the same topic, they are very likely to repeat some important information in different documents or in different places of the same document.
the former document set contains 10 documents about 'Labor Dispute in National Basketball Association', while the latter one contains 25 documents about 'Art and music in public schools' and the corresponding query is to 'Describe the state of teaching art and music in public schools around the world, indicate problems, progress and failures'.
neutral
train_92843
"Yes", "Well"), set phrases (e.g.
we used the texts as dialogue data in the experiments.
neutral
train_92844
is percent correct calculated by (T-S-D)/T, and W.A.
the result indicates that the AS method (without RFU) can eliminate uninformative frequent utterances when generating an indicative summary or informative summary with a low compression rate.
neutral
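The percent-correct metric (T − S − D)/T referenced in the pair above is easy to state in code. The word-accuracy (W.A.) formula is not given in the text, so the standard insertion-penalized form is assumed here and should be treated as a guess.

```python
def percent_correct(T, S, D):
    """Corr. = (T - S - D) / T, where T is the number of reference words,
    S the substitutions, and D the deletions."""
    return (T - S - D) / T

def word_accuracy(T, S, D, I):
    """W.A. (assumed standard form, elided in the source): additionally
    penalizes insertions I: (T - S - D - I) / T."""
    return (T - S - D - I) / T

acc = percent_correct(100, 5, 3)
```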
train_92845
For example, the generative process of lecture speech, with regards to a hierarchical structure (here, bullet trees), is char-acterized in general by a speaker's producing detailed content for each bullet when discussing it, during which sub-bullets, if any, are talked about recursively.
this type of approaches, however, are unlikely to recover semantic structures more detailed than slide boundaries.
neutral
train_92846
As far as time complexity is concerned, the graph-partitioning models discussed above are quadratic with regard to N, i.e., O(MN²), where M ≪ N, with M and N denoting the number of bullets and utterances, respectively, and with the loop kernel computing and filling the M × N × N matrix D[i, j, k] in equation 3.
the cohesion conveyed by the repetition of the words that appear in transcripts but not in slides could be additionally helpful; this is very likely to happen considering the significant imbalance of text lengths between bullets and transcripts, from which the alignment models by themselves may suffer.
neutral
train_92847
These differences are statistically significant (p < 0.05).
our main goal is to efficiently identify questions covering a wide range of topics while matching a certain style, often represented by colloquial textual fragments and therefore consisting of frequent words.
neutral
train_92848
We only need to use a small amount of annotated development data, 500 articles in our study to guide the instance selection to achieve similar performance as with unannotated test set being development data.
wikipedia requires contributors to assign categories to each article, which are defined as "major topics that are likely to be useful to someone reading the article".
neutral
train_92849
However, from our observation some categories in Wikipedia may not be suitable to model the topics of a document.
in the instance selection process, we use the KB entries with more than 15 linked documents in the auto-generated data as our initial Training Set (1,800 instances) to train a classifier, and then use this classifier to select instance from the auto-generated dataset.
neutral
train_92850
Another fact is that the selection process is similar to active learning, which needs to manually annotate the selected instances in each batch.
the Stanford Topic Model Toolbox.
neutral
train_92851
For the computational cost, we note that document annotation is performed offline, while queries are typically short and thus query annotation and expansion could be done quickly.
each term is an NE feature, a WW feature, or a keyword.
neutral
train_92852
Next, other nodes (web pages) of their network are activated.
mihalcea and Moldovan (2000) use senses in both queries and documents, and all forms of every hypernym of a sense in a document.
neutral
train_92853
Besides, there is latent information of the interrogative words Who, What, Which, When, Where, or How in a query.
a statistical significance test is required (Hull, 1993).
neutral
train_92854
In the future, we will apply CLGVSM to more language pairs and extend it to more than two languages.
experimental results on benchmarking data set show that (1) the proposed CLGVSM is very effective for cross-document clustering, outperforming the two strong baselines vector space model (VSM) and latent semantic analysis (LSA) significantly; and (2) the new feature selection method can further improve CLGVSM.
neutral
train_92855
In that case, the associate matrix becomes sparse and computational time can be saved.
dictionary and corpus are two popular ways to get cross-language information.
neutral
train_92856
Because the quantity of spam postings is small, after calculating the similarity, the cluster that has the fewest postings may be filtered out as spam.
let i A represent the set of articles that are managed in a system-generated cluster , is the set of articles managed in a human-generated cluster .
neutral
train_92857
Our approach is similar to HAC in nature.
it is not known whether the clustering algorithms are effective in microblog TD.
neutral
train_92858
Once a thread text is assigned a topic label, the microblog texts in this thread are all assigned the label.
a clean and topic-related thread is obtained from thread .
neutral
train_92859
Plural nouns are a property of general sentences.
some examples of sentences with different agreement levels are shown in Table 5.
neutral
train_92860
Table 9: Mean value of confidence score on correct predictions. Here, after combining the examples, the classifier learns the lexical features indicative of both types of articles.
to efforts in automatic discourse processing (Marcu and Echihabi, 2001;Sporleder and Lascarides, 2008), in our work we are not interested in identifying adjacent sentences between which this relation holds.
neutral
train_92861
According to their significance to sentiment classification, we categorize the POS tags into four groups, as shown in Table 1.
the distribution of O changes the least.
neutral
train_92862
Using different supervised learning classifiers on various sizes of training datasets, we improved and demonstrated the efficiency of our feature set by adding syntactic patterns extracted from POS tags.
small but comprehensive feature set: we only use a small set of features to describe a citation sentence.
neutral
train_92863
We argue that one major reason behind this is a missing common understanding of genres, and that we need to focus on the single aspects of genres in order to overcome this situation.
nevertheless, Figure 3b also conveys that we achieved 13.1% less accuracy in the smartphone domain than in the music domain (68.8% vs. 81.9%).
neutral
train_92864
Though precision significantly dropped for commercial, given a class imbalance of 1:9 (commercial:rest), 61.9% is still over five times better than the expected precision of guessing.
additionally, Boese and Howe (2005) recognized that, in the web, genres may evolve over time to other genres.
neutral
train_92865
For the writing style features, we determined the 48 most common words, the 55 most common part-of-speech trigrams, and the 35 most common character trigrams on the training set of each collection.
lFA is about the question why a text was written and, thus, refers to the authorial intention behind a text.
neutral
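The writing-style features in the pair above include the most common character trigrams of a training collection; extracting them can be sketched with a counter. The function name and toy inputs are illustrative.

```python
from collections import Counter

def top_char_trigrams(texts, k=35):
    """Return the k most common character trigrams over a collection,
    as used for writing-style features."""
    counts = Counter()
    for t in texts:
        counts.update(t[i:i + 3] for i in range(len(t) - 2))
    return [tri for tri, _ in counts.most_common(k)]

tris = top_char_trigrams(["the cat sat on the mat", "the hat"], k=5)
```

The same pattern applies to the most common words and part-of-speech trigrams, counting tokens or POS n-grams instead of character windows.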
train_92866
Similar problems refer to the analysis of speaker intentions in conversation (Kadoya et al., 2005) and to the area of textual entailment (Michael, 2009), which is about the information implied by text.
many different genre classification schemes exist, which makes most approaches to genre identification badly comparable as a recent study showed (Sharoff et al., 2010).
neutral
train_92867
However, these three parts may belong to different domains, leading to distribution variations, which indicates that the features and weights learned from the data may be biased estimates.
the feature weight is further re-estimated based on both the development dataset and the test dataset with pseudo references.
neutral
train_92868
Although the development of decoding algorithms is a key topic in SMT research, if we are to construct better SMT systems it is also important to find a way to determine the weights of different model components.
the outer loop first runs the decoder over the source sentences in the tuning dataset with the current weights to generate N -best lists of translations.
neutral
train_92869
Finally, if we cannot find a new global best position around the current global best position in a certain number of iterations T c then the current global best position can be considered a local maximum.
hereafter, we assume a maximization problem.
neutral
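The stopping criterion in the pair above (declare a local maximum if no new global best is found within T_c iterations) can be sketched as a stagnation counter around a simple local search. This is a generic illustration under that maximization assumption, not the paper's full search procedure; all names are hypothetical.

```python
import random

def hill_climb_with_stagnation(f, x0, step=0.1, T_c=50, seed=0):
    """Sample around the current global best; if no improvement is found
    within T_c consecutive iterations, treat the best position as a
    local maximum and stop (maximization problem)."""
    rng = random.Random(seed)
    best_x, best_f = x0, f(x0)
    stagnant = 0
    while stagnant < T_c:
        cand = best_x + rng.uniform(-step, step)
        fc = f(cand)
        if fc > best_f:
            best_x, best_f = cand, fc
            stagnant = 0          # new global best found: reset the counter
        else:
            stagnant += 1
    return best_x, best_f

x, fx = hill_climb_with_stagnation(lambda v: -(v - 2.0) ** 2, 0.0)
```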
train_92870
We first observe that all models predicting a target vocabulary get better BLEU than the baseline.
we also observed that MLP-64 systems are significantly better than all the other three systems.
neutral
train_92871
This sparsity helps the model to scale well to larger vocabulary, but it cannot model relations in translation beyond word cooccurrences.
we first tried fixed learning rates, but the results were disappointing.
neutral
train_92872
Recall that the true weight is e^{a_i}. 7 Conclusions and Future Work: In this paper, we presented a novel method of cross-adaptation based system combination which obtains statistically significant BLEU gains over the best single system.
using the dissimilarity cross-adaptation as a second additional system helps slightly more, bringing the total gain to 0.45 BLEU (CN w/ CA+DCA).
neutral
train_92873
In this case, we instead measured the TER of the cross-adaptation output against the external input system hypotheses.
the GALE-only training data consists of 46 million words of LDC-released data plus 30 million words released by Sakhr Software.
neutral
train_92874
These features do cause some amount of overfitting on the tuning set, but we have not found this to be harmful to the test sets.
although our new method does produce a significant gain over the best single system, it does not perform as well as our confusion network decoding on any condition that we tried.
neutral
train_92875
The differentiation of this function with respect to ⃗w can be performed in a fairly small number of steps using basic calculus, the details of which are provided in (Devlin, 2009).
we used the input hypotheses as reference translations and optimized in the opposite direction from what we normally would.
neutral
train_92876
U# in the table shows the number of used unlabeled instances.
by applying the classifier form and training method presented in , we defined P(s|x); we also provided the objective function J for the parameter estimation of P(s_k|x; W, Θ, β) using labeled and unlabeled datasets, where β (> 0) is a combination weight.
neutral
train_92877
Such sentences are informative, but in most cases of paper dictionaries such as Iwanami, the examples are fragmentary to save space (in Iwanami, an average of 4 words).
features for each target word w, we used the surface form, the base form, the POS tag, and the coarse POS categories, such as nouns, verbs, and adjectives of w. Then we also used bag-of-words in the same sentence.
neutral
train_92878
We used all extracted labeled instances when the number was less than #.
we attempt to extract longer, more natural and high quality labeled data from the raw corpus, under strict conditions using fragmentary examples.
neutral
train_92879
Another type of method designed to make up for a lack of training data, is automatic labeled data expansion (Mihalcea and Moldovan, 1999;Agirre and Martinez, 2000) .
when using labeled data for semi-supervised learning, minimum expansion provides the best performance, and protects the system against sense bias.
neutral
train_92880
The main aim of this step is to obtain reliable labeled data even for unseen and infrequent senses.
no method based on given training data alone (Trn) can guess these senses correctly.
neutral
train_92881
Polysemy is the tendency for ConceptNet concepts to have multiple senses.
in addition, ConceptNet defines nearly thirty kinds of semantic relations, such as CapableOf (agent's ability), SubeventOf (event hierarchy), MotivationOf (affect), DesireOf (want to), and so on, most of which are not included in WordNet.
neutral
train_92882
This paper proposes a novel idea of combining WordNet and ConceptNet for WSD.
for ambiguous concepts with multiple senses, there are multiple nodes in the network.
neutral
train_92883
Of course, in real applications, it is impossible to know which kind of noises exist and where they are in advance.
although hypernymy, hyponymy, and meronymy/holonymy are transitive and can generate the indirect WordNet resources, the number of meronymy/holonymy relations is far below those of the other two in WordNet.
neutral
train_92884
There are two kinds of noise: that from the WSP of the correct sense, and that from the WSP of the wrong sense(s).
obviously, this kind of noise has high NGD scores and would increase the score of the WSP of the correct sense, thus correspondingly decreasing the probability of selecting the correct sense as the appropriate sense of the ambiguous concept.
neutral
train_92885
Ideally, the NGD score of any term in the WSP of the correct sense is lower than that in the WSP of incorrect sense, thus we can simply use the arithmetic mean of the scores to evaluate the relatedness of WSP and the assertion.
for example, the concept airplane has only one sense: "a fix-wing aircraft", which is also the first sense of the concept plane.
neutral
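The scoring in the pair above, taking the arithmetic mean of NGD scores over the terms of a WSP (lower meaning more related), can be sketched as follows. The NGD formula used is the standard Normalized Google Distance from hit counts, assumed here since the text does not restate it; the function names are illustrative.

```python
import math

def ngd(fx, fy, fxy, N):
    """Normalized Google Distance from page counts: f(x), f(y) are hit
    counts for each term, f(x,y) the joint count, N the total page count.
    NGD = (max(log f(x), log f(y)) - log f(x,y))
          / (log N - min(log f(x), log f(y)))."""
    lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lx, ly) - lxy) / (math.log(N) - min(lx, ly))

def wsp_relatedness(scores):
    """Arithmetic mean of the NGD scores of the terms in a WSP;
    a lower mean indicates the profile is more related to the assertion."""
    return sum(scores) / len(scores)

mean_score = wsp_relatedness([ngd(1000, 2000, 500, 10**9),
                              ngd(800, 1200, 20, 10**9)])
```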
train_92886
Therefore, if extending WordNet with the large amounts of semantic relations contained in ConceptNet, it is desirable to improve the performances of WordNet-based WSD methods.
for ambiguous concepts with multiple senses, there are multiple nodes in the network.
neutral
train_92887
Concept airplane has an atLocation relation with the concept airplane hangar, whereas plane has not such relation with airplane hangar.
we do not list the accuracies of combining other indirect resources with the direct ones).
neutral
train_92888
Li and Li (2004) proposed an approach based on bilingual bootstrapping which does not need parallel corpora and relies only on in-domain corpora from two languages.
amongst all the POS categories, the performance of our algorithm is lowest for verbs.
neutral
train_92889
These two steps are described in more details in the following sections.
(Afzal, 2009) observes a different hierarchy, DT > ME > NB, but on a different corpus and a different language, which makes the comparison difficult.
neutral
train_92890
As the PageRank strategy only relies on the graph structure without considering the weight of the edges, its highest ranked entities are those that are highly connected regardless of the weight of the edges.
two entities are associated with different earthquake events.
neutral
train_92891
As every slot is associated with an entity type, we need to select (when it is possible) one entity value for each slot.
we need to segment texts according to the events they refer to.
neutral
train_92892
QuestionBank does not provide function tags, and therefore in training and evaluation of the parsers, abstracted dependencies were extracted from the corpus.
the problem is more serious for imperatives.
neutral
train_92893
Table 6 gives the recall errors on labeled dependencies, which were observed more than ten times for 100 analysis sentences in each domain.
if these were embedded in another imperative or question, we only extracted the outermost one.
neutral
train_92894
(2002) proposed a probabilistic model for zero pronoun detection and resolution that used hand-crafted case frames.
for instance, there are thousands of named entities (NEs) that cannot intrinsically be covered.
neutral
train_92895
For example, although the dative argument of the predicate 'yaku (bake/burn)' is generally filled by a disk, such as a CD or DVD, it is often filled by a person, such as 'musuko (son),' if the accusative argument is filled by 'te (hands)' as in the example (i).
although most previous work has focused on zero anaphora in newspaper articles (footnote 1: 'Yaku' is the original form of 'yaiteiru').
neutral
train_92896
In addition, we generalized case slot examples based on automatically acquired multi-word noun clusters.
since we basically focused on the zero anaphora in Web text that only adhered slightly to formal grammar, we did not give priority to exploring effective syntactic patterns.
neutral
train_92897
We were not able to obtain the sentence sets used in and Cholakov et al.
the naive one assigns the most frequent expanded type in the lexicon, count-noun-le f, to each unknown word.
neutral
train_92898
The methods are compared based on the effect their application has on the parsing coverage and accuracy of the GG grammar of German (Crysmann, 2003).
the average type precision and type recall for the 400 test words are given in table 5.
neutral
train_92899
(2008) can define more features in the created lexical entry due to the detailed tagset used.
the focus is primarily on improving the parsing coverage and accuracy of the grammar for a particular input.
neutral