| id (string, 7–12 chars) | sentence1 (string, 6–1.27k chars) | sentence2 (string, 6–926 chars) | label (4 classes) |
|---|---|---|---|
train_97100
|
A cross-language matching error resulted in the linking of "White House", and the reduced granularity of the contexts precluded further disambiguation.
|
the evaluation is dominated by the penalty for spurious system coreference chains.
|
neutral
|
train_97101
|
These differences are expected as the baselines cannot combine different text segments that describe the same event.
|
our results demonstrate the importance and effectiveness of global constraints for event record extraction.
|
neutral
|
train_97102
|
If the intersection between each pair of subsets, f i , f j ∈ F , had been empty, we could have found the MAP assignment by solving each potential separately.
|
the number of input words in a document is denoted by n. We define three types of potentials: • Field-labeling Potentials associate words in a document with field labels based on their local sentential context.
|
neutral
|
train_97103
|
We implement this by incorporating event-based features into the feature set of the field labeling CRF, while keeping the event segmentation CRF fixed.
|
as in the MUC-4 guidelines, we count pre-specified synonyms and morphological derivations of the same word only once.
|
neutral
|
train_97104
|
We noticed that the scope of a given reference often consists of units of higher granularity than words.
|
this allows us to use the popular classification agreement measure, the Kappa coefficient (Cohen, 1968).
|
neutral
|
train_97105
|
Average pretest and posttest scores in the corpus were 51.0% and 73.1% (out of 100%) with standard deviations of 14.5% and 13.8%, respectively.
|
we showed a significant partial correlation between the percentage of manual DISE labels and posttest controlled for pretest score.
|
neutral
|
train_97106
|
The correlation for the final model is 0.60 on all responses, which is significantly better than the individual models (0.48 for VSM, 0.45 for LSA, and 0.53 for PMI).
|
in this section we show the performance of the content features extracted using ASR hypotheses.
|
neutral
|
train_97107
|
We separate calls into quintile groups based on word counts.
|
other, Past, Sleep, Pronoun, Tentat, Cogmech, Insight, Humans / Comm, We, Incl, You, Preps, Number Familiar v. other other, Assent, Past, I, Leisure, Self, Insight / Fillers, Certain, Social, Posemo, We, Future, Affect, Incl, Comm, Achieve, School, You, optim, Job, occup receiver, voicemail, digit, representatives, Chrysler, ballots, staggering, refills, resented, classics, metro, represented, administer, transfers, reselling, recommendations, explanation, floral, exclusive, submit.
|
neutral
|
train_97108
|
The skip-chain CRF introduces loops and requires approximate in-ference, which motivates minimum risk training.
|
the other two problems, information extraction from semistructured text and collective multi-label classification, have been modeled with loopy CRFs before.
|
neutral
|
train_97109
|
Similar sentences w share subpaths in the FSA and cannot easily be disentangled.
|
we do have an efficient modification in which the window is centered on the word, by using an FST cq that delays the emission of a tag until up to 2 subsequent words have been seen.
|
neutral
|
train_97110
|
It is labeled with the special symbol Φ, which does not contribute to the word string accepted along a path.
|
the value of a actually depends on how one reaches a!
|
neutral
|
train_97111
|
And researchers have grown increasingly concerned that automatic metrics have a strong bias towards preferring statistical translation outputs; the NIST (2008, 2010), MATR (Gao et al., 2010) and WMT (Callison-Burch et al., 2011) evaluations held during the last five years have provided ample evidence that automatic metrics yield results that are inconsistent with human evaluations when comparing statistical, rule-based, and human outputs.
|
job candidates typically take a written test in which they are asked to translate four passages (i.e., paragraphs) of increasing difficulty into English.
|
neutral
|
train_97112
|
On the surface, it does not match its counterpart, 焉 yan, the fourth character in the first line, since yan is a sentence particle and zhe is a noun.
|
for each character, the tone is noted as either level (ping 平) or oblique (ze 仄).
|
neutral
|
train_97113
|
Second, our treebank is expected to be used pedagogically, and we expect explicit grammatical relations between words to be helpful to students.
|
in Classical Chinese, the preposition is frequently omitted, with the bare locative noun phrase modifying the verb directly.
|
neutral
|
train_97114
|
Another key contribution was to unify two, previously incompatible, large student response corpora under this common annotation scheme.
|
this approach is significantly limited by the requirement that the domain be small enough to allow comprehensive knowledge engineering, and it is very labor-intensive even for small domains.
|
neutral
|
train_97115
|
The values are computed for each group separately; Table 1 shows the averages across five groups.
|
overall, glancing at these segmentations suggests that there is a prominent topical shift between paragraphs 9-11, three significant ones (after 2, 10 and 12) and several minor fluctuations (after 3 and possibly after 10 and 11).
|
neutral
|
train_97116
|
Due to the excessively large window there will almost always be a boundary where fine-grained annotations are concerned, but those boundaries will not correspond to the same phenomena.
|
if, however, near-hits are considered, suggested segment boundaries can be ranked by their prominence using the information about how many people include each boundary in their annotation.
|
neutral
|
train_97117
|
By exponentiating and normalizing score(x, y, h; θ), we obtain a conditional log-linear model, which is useful for training criteria with probabilistic interpretations: The log loss then defines loss i) ).
|
3 most training procedures periodically invoke the decoder to generate new k-best lists, which are then typically merged with those from previous training iterations.
|
neutral
|
train_97118
|
For large comparable texts, we use contextual clues, namely translations of neighboring words, to constrain TM and to preserve precision.
|
such cases adversely affect precision.
|
neutral
|
train_97119
|
of MBOT over XTOP R are described by Maletti (2011a).
|
this standard approach due to Engelfriet (1975) is used in many similar constructions.
|
neutral
|
train_97120
|
For an efficient representation and efficient modification algorithms (such a k-best extraction) we would like L to be regular.
|
maletti (2010a) shows that we can construct an XTOP R m such that for every t ∈ T Σ and u ∈ T ∆ .
|
neutral
|
train_97121
|
So, it is interesting to automate the process that produces relevant titles by extracting them from texts, and supplying other applications with such data, while avoiding any human intervention: Direct applications (as automatic titling of "no object" e-mails) are thus possible.
|
• POSTIT: Based on the extraction of noun phrases to propose them as titles.
|
neutral
|
train_97122
|
Applied to a corpus of journalistic articles, CATIT was able to provide headings both informative and catchy.
|
so, if in the treated constituent a single proper noun appears (easily locatable by the presence of a capital letter), the common noun can be put in connection with the nominalized past participle (without concluding that this common noun is an agent of the nominalized verb).
|
neutral
|
train_97123
|
The POS features abstract away from the words and avoid the problem of data sparseness by allowing the classifier to focus on the categories of the words, rather than the lexical items themselves.
|
while there are certainly rule-based decision points for comma insertion (Doran, 1998), particularly in the case of commas that set off significant chunks or phrases within sentences, there are also some commas that appear to be more prescriptive, as they have less of an effect on sentence processing (such as in example (2) in the introduction), and opposing usage rules for the same contexts are attested in different style manuals.
|
neutral
|
train_97124
|
Hopefully, this car will last for a while.
|
a shortcoming of this methodology is that it dictates that all commas are missing, but these parses were generated with comma information present in the sentence and moreover handcorrected by human annotators.
|
neutral
|
train_97125
|
The base release of Chinese CCGbank, corpus A, like (2-a), makes the distinction between categories LCP and NP.
|
we also report labelled sentence accuracy (Lsa), the proportion of sentences for which the parser returned all and only the gold standard dependencies.
|
neutral
|
train_97126
|
For example, in the USE-POS run, according receives the incorrect supertag P VP, leading to an incorrect structure, while in the USE-SUPER run, it is able to use P PP, leading to the correct structure.
|
any word with the POS tag JJ, JJR, or JJS, receives the supertag P aDJP, and so on.
|
neutral
|
train_97127
|
The benefit of this approach can only be seen when this line of work is extended to experiments with parsing and Arabic conversion.
|
the dollar sign is treated as a displaced word and is therefore not counted, in a QP constituent, as a token for purposes of the "single token" rule.
|
neutral
|
train_97128
|
Improvement with agr is roughly uniform across all dataset sizes; this was the general trend for all treebanks.
|
in synthetic languages, words which are syntactically related in certain ways must agree: e.g., subject-verb agreement for gender or determiner-noun agreement for case (Corbett, 2006).
|
neutral
|
train_97129
|
In this paper we present a new corpus of tweets with labeled events by taking a very similar approach to that taken by NIST when creating the TDT corpora.
|
adding 20% of random pairs performs substantially worse than the original corpus.
|
neutral
|
train_97130
|
(2010)), introducing not only a bias towards highvolume events, but also a bias toward the kinds of events that their system can detect, and ii) the authors only considered tweets by users who set their location to New York, which introduces a strong bias towards the type of events that can appear in the corpus.
|
changing the precision has a bigger impact on the results.
|
neutral
|
train_97131
|
We were also able to report a slight gain by adding the models to a very strong setup with discriminative word lexicons, triplet lexicon models and a discriminative reordering model.
|
we give phrase-level scoring functions for the four features.
|
neutral
|
train_97132
|
In one of our strongest setups, which includes discriminative word lexicon models (DWL), triplet lexicon models and a discriminative reordering model (discrim.
|
the positive impact of the models was mainly noticeable when we exclusively applied lexical smoothing with word lexicons which are simply extracted from word-aligned training data, which is however the standard technique in most state-ofthe-art systems.
|
neutral
|
train_97133
|
More and more language workers and learners use the MT systems on the Web for information gathering and language learning.
|
no previous work in CAT uses Google Translate for comparison.
|
neutral
|
train_97134
|
To the best of our knowledge our proposed task is unexplored in previous work.
|
this tool has two components devoted to (1) isolation of individual corrections in a sentence pair, and (2) classification of these corrections.
|
neutral
|
train_97135
|
WinPR is easy to implement and provides more detail on the types of errors in a computed segmentation, as compared with the reference.
|
2008 WinPR counts boundaries, not windows, which has analytical benefits, but WindowDiff's counting of windows provides an evaluation of segmentation by region.
|
neutral
|
train_97136
|
A similar Maximum Entropy model is presented by Chen and , where features are the presence or absence of a given character n-gram in w. In our approach, feature functions are defined at character positions rather than over the entire word.
|
method A uses only language-specific models: where Pr(φ|ḡ, l) is estimated by model m l .
|
neutral
|
train_97137
|
The morphotactics causes, amongst other phenomena, the final consonant of a morpheme to assimilate the manner of the initial consonant of the following morpheme (as in -villi), or to be dropped (as in natsiviniq-).
|
the test set was composed of 75 of these sentences (about 2K English tokens, 800 Inuktitut tokens, 293 gold-standard sure alignments, and 1679 probable), which we use to evaluate word alignments.
|
neutral
|
train_97138
|
This uses the top output of the morphological analyser as the oracle segmentation of each Inuktitut token.
|
morphemes are not readily accessible from the realised surface form, thereby motivating the use of a morphological analyser.
|
neutral
|
train_97139
|
As expected, the predicted ASR accuracy increases as EEG classification accuracy increases, for both groups (adults and children) and both levels of difficulty (easy and difficult).
|
as expected, the predicted aSR accuracy increases as EEG classification accuracy increases, for both groups (adults and children) and both levels of difficulty (easy and difficult).
|
neutral
|
train_97140
|
Experiments are performed using srilm (Stolcke, 2002), in particular the Kneser-Ney (KN) and generic class model implementations.
|
we found that θ has a considerable influence on the performance of the model and that optimal values vary from language to language.
|
neutral
|
train_97141
|
Context features bind an output symbol with input n-grams in a focus window centred around the input-output alignment; the input n-grams represent the context in which the output character is generated.
|
the number of common elements with the English-Japanese transliteration corpus was 6,288 for Combilex and 2,351 for CELEX; in total, there were 6,384 transliteration entries for which at least SEQUItUR DIRECtL+ Acc.
|
neutral
|
train_97142
|
Performing this test with DI-RECTL+ as the base system shows good error rate reduction on names (about 12%) as reported, but a much smaller statistically insignificant error rate re-duction on core vocabulary words (around 2%).
|
for example, IPA transcriptions could be mined from Wikipedia despite the fact that different transcriptions may have been written by different people.
|
neutral
|
train_97143
|
Second, we would like to be able to incorporate general supplemental information rather than being limited by the existence of relevant data.
|
intuitively, if two strings contain symbols or n-grams that often co-occur in the training data, their alignment score will be higher.
|
neutral
|
train_97144
|
Excepting languages with highly transparent orthographies, the number of letter-to-sound rules appears to grow geometrically with the lexicon size, with no asymptotic limit (Kominek and Black, 2006).
|
it achieves only 64.8% word accuracy, which is lower than any of the results in Table 4.
|
neutral
|
train_97145
|
Types is the number of word types in each corpus, True is the number of gold tags and Induced reports the median number of tags induced by the model together with standard deviation.
|
we found that the emission probability component in log-scale is roughly four times smaller than the transition probability.
|
neutral
|
train_97146
|
This sum of hinge-losses is 0 only if each pair is separated by a model score of 1.
|
this architecture is desirable, as most groups have infrastructure to k-best decode their tuning sets in parallel.
|
neutral
|
train_97147
|
Without the complications added by hope decoding and a time-dependent cost function, unmodified MIRA can be shown to be carrying out dual coordinate descent for an SVM training objective (Martins et al., 2010).
|
this requires a sentence-level approximation of BLEU, which we re-encode into a cost ∆ i (e) on derivations, where a high cost indicates that e receives a low BLEU score.
|
neutral
|
train_97148
|
In most statistical machine translation systems, the input source text is translated in entirety, i.e., the search for the optimal target string is constrained on the knowledge of the entire source string.
|
the amount of training data for the English acoustic model is around 900 hours of speech, while the data for training the Spanish is approximately half that of the English model.
|
neutral
|
train_97149
|
Minimum error rate training (MERT) was performed on a development set to optimize the feature weights of the log-linear model used in translation.
|
in applications such as language learning and real-time speech-to-speech translation, incrementally translating the source text or speech can provide seamless communication and understanding with low latency.
|
neutral
|
train_97150
|
Note that revealing novel positive implicit meaning is not always possible, e.g., statement (4).
|
the work presented here effectively extracts specific positive implicit meaning from negated statements.
|
neutral
|
train_97151
|
Statement (3) implicitly states that it is growing fast enough for other parties.
|
in a second round, sentences were annotated following the improved guidelines (Section 4.1).
|
neutral
|
train_97152
|
The best model obtains an f-measure of 68.84, calculated by considering exact matches between chunks.
|
we choose enough [interpretation: it is growing insufficiently fast for us] since it reveals novel positive meaning [criterion 5a].
|
neutral
|
train_97153
|
(1992) whose method induces binary trees.
|
each such node is the root of a densely-connected subtree; each such subtree is then assumed to represent a single discrete cluster of related items, whereθ = mean(θ) (illustrated in Figure 1c).
|
neutral
|
train_97154
|
Given previous work on word clusters for various linguistic structure prediction tasks, these results are not too surprising.
|
here, on average, the best clusters provide a 24% relative error reduction on the test set (75.8 vs. 68.1 F 1 ).
|
neutral
|
train_97155
|
Thus, receiving feedback regarding a single word in a sen-tence equals to about 1/18 ≈ 5.5% of the information provided by a fully annotated sentence.
|
yet, the annotator manually provides the correct parse, if it is not found within the proposed alternatives.
|
neutral
|
train_97156
|
As mentioned before, we found empirically that this arises in only ∼5% of the instances.
|
the parse it would output for x and the best alternative.
|
neutral
|
train_97157
|
On further analysis, we find that even though the dialog act tagger has a high accuracy (85.8% in our cross validation), it obtained a very low recall of 28.6% and precision of 47.6% for the RequestAction dialog act.
|
we instead use the DA tagger of Hu et al.
|
neutral
|
train_97158
|
We address this issue, by adding a monotonic concatenation (called X-glue) rule that concatenates a series of hierarchical rules.
|
shallow-n grammars require parameters which cannot be directly optimized using minimum error-rate tuning by the decoder.
|
neutral
|
train_97159
|
Intuitively, we can imagine a bookshelf in which books are sorted by their reading levels.
|
in order to evaluate the benefits of using visual cues to assess reading levels, we repeat the experiments using SVM rank based on our proposed ranking book leveling algorithm with only the visual features or only surface features.
|
neutral
|
train_97160
|
As we have found in data analysis, it is frequently the case that a topic dominates within a sampling unit (sentence), and that units from the same segment frequently are dominated by the same topic.
|
to generate a document d the topic proportions are drawn using a Dirichlet distribution with hyperparameter α.
|
neutral
|
train_97161
|
A comparison with a bilingual topic model created from parallel data would also prove interesting.
|
the novelty of our work is the transformation of a source language topic model rather than the creation of a language independent model from parallel data.
|
neutral
|
train_97162
|
Methods exist for aligning parallel corpora, and extracted parallel segments can be used to, for example, augment machine translation phrase tables, but the amount of genuinely parallel data is limited.
|
by the design of the experiment, an article about the same subject has to exist in both languages, and therefore the low recall value is surprising.
|
neutral
|
train_97163
|
Our system can be applied to any pair of languages for which there is a dictionary.
|
parallel segments can also be extracted from comparable corpora (a comparable corpus is one which contains similar texts in more than one language).
|
neutral
|
train_97164
|
We adapt methods proposed by Mejer and Crammer (2010) in order to produce per-edge confidence estimations in the prediction.
|
we trained a model per dataset and used it to parse the test set.
|
neutral
|
train_97165
|
IBM Model 2 involves a slight change to Model 1 in which the probability of a word link depends on the word positions.
|
if the tag transition probabilities p(y j | y j−1 ) are all constants and also do not depend on the previous tag y j−1 , then we can rewrite Eq.
|
neutral
|
train_97166
|
The ngram features (row III), on the other hand, lead to scores lower than the random baseline.
|
1 It was created by manually identifying the emotions of a few seed words and then labeling all their WordNet synonyms with the same emotion.
|
neutral
|
train_97167
|
the speaker's expertise, or agreement relations between speakers.
|
debates in our corpus vary greatly by topic on two dialogic factors: (1) the percent of posts that are rebuttals to prior posts, and (2) the number of Stance Utterance P1 CON 69 people have been released from death row since 1973 these people could have been killed if there cases and evidence did not come up rong also these people can have lost 20 years or more to a false coviction.
|
neutral
|
train_97168
|
In this paper, we address the problem of context-enhanced citation sentiment detection.
|
as is evident from Table 1, including the 4 sentence window around the citation more than doubles the instances of subjective sentiment, and in the case of negative sentiment, this proportion rises to 3.
|
neutral
|
train_97169
|
(2011) targeted the prediction of conversations and their length and Suh et al.
|
the task of predicting responses in social networks has been investigated previously: Hong et al.
|
neutral
|
train_97170
|
After learning a model and testing it, we removed the feature family that was overall most highly ranked by MART (i.e., was used in high-level splits in the decision trees) and learned a new model.
|
our focus is on predicting events within the network.
|
neutral
|
train_97171
|
The Oxford Dictionary of First Names (Hanks et al., 2007), for instance, presents a comprehensive description of origins and common uses of most nicknames in modern English.
|
this process should turn billions of input records into a few hundred million clusters of records (or profiles), where each cluster is uniquely associated with a real-world unique individual.
|
neutral
|
train_97172
|
Web-based VSMs leverage Web search results to form a vector of each query (Sahami and Heilman, 2006).
|
it is unrealistic to assume any of the methods can correlate perfectly to the mean human judgement scores.
|
neutral
|
train_97173
|
Bannard and Callison-Burch (2005) applied the idea that French phrases aligned to the same English phrase are paraphrases in a system that induces paraphrases by pivoting through aligned foreign phrases.
|
we first asked Turkers to re-annotate a sample of existing goldstandard data.
|
neutral
|
train_97174
|
While this paper has focused on instruction generation, the hierarchical approach in our learning framework helps to scale up to larger NLG tasks, such as text or paragraph generation.
|
we compare the user satisfaction and naturalness of surface realisation using Hidden Markov Models (HMMs) and Bayesian Networks (BNs) which both have been suggested as generation spaces-spaces of surface form variants for a semantic conceptwithin joint NLG systems (Dethlefs and Cuayáhuitl, 2011a;Dethlefs and Cuayáhuitl, 2011b) and in isolation (Georgila et al., 2002;Mairesse et al., 2010).
|
neutral
|
train_97175
|
While BNs also place independence assumptions on their variables, they usually overcome the problem of lacking context-awareness by their dependencies across random variables.
|
regarding their application to surface realisation, we can argue that while BNs are the best performing model in isolation, HMMs represent a cheap and scalable alternative especially for largescale problems in a joint NLG system.
|
neutral
|
train_97176
|
BNs are thus able to compute the posterior probability of a surface form based on all relevant properties of the current situation (not just the occurrence in a corpus).
|
the random variables of BNs allow them to keep a structured model of the space, user, and relevant content selection and utterance planning choices.
|
neutral
|
train_97177
|
As FB ran into problems both in terms of execution time and failure rates, we omitted it from the large scale experiments.
|
the algorithm terminates when C is empty.
|
neutral
|
train_97178
|
The information found in microblogs is difficult to find anywhere else, including news and Web archives, thereby making it a valuable resource for a wide variety of users.
|
the correlations indicate a strong correlation with event popularity for the keyword approach.
|
neutral
|
train_97179
|
While it is common practice to smooth P (w|T S i ) using Dirichlet (or Bayesian) smoothing (Zhai and Lafferty, 2004), it is less common to smooth the general English language model P (w).
|
preliminary experiments showed that the arithmetic mean was susceptible to overweighting terms that had a very large burstiness score in a single timespan.
|
neutral
|
train_97180
|
For example, there are more weddings than earthquakes.
|
energy-related event queries, such as "blackout", achieves very poor effectiveness.
|
neutral
|
train_97181
|
It shows that even simple features and an off-the-shelf classifier can detect some signal in the text. (Table 4, confusion matrix of teasing classification — predicted Tease/Not: actual Tease 52/47, actual Not 26/559.)
|
for example, the distance between "vinny" and its relevant verb "tell" is 1.
|
neutral
|
train_97182
|
Similarly, an inert label switched to flow will require all of the ancestors of that node to switch to flow as in 2(d).
|
2 In the remainder of this section we describe the necessary steps to create a training corpus for finegrained sentiment analysis.
|
neutral
|
train_97183
|
There are also a number of mentioned concepts that could serve as the topic of an opinion in the sentence, or target.
|
we construct a trie for each appearance of any of these possible target terms.
|
neutral
|
train_97184
|
Following Das and Petrov (2011) and Subramanya et al.
|
all the graph-based models are better than the supervised baseline; for our objectives using pairwise Gaussian fields with sparse unary penalties, the accuracies are equal or better with respect to NGF-2 ; however, the lexicon sizes are reduced by a few hundred to a few thousand entries.
|
neutral
|
train_97185
|
It is empirically observed that contextualized word types can assume very few (most often, one) POS tags.
|
all the graph-based models are better than the supervised baseline; for our objectives using pairwise Gaussian fields with sparse unary penalties, the accuracies are equal or better with respect to NGF-2 ; however, the lexicon sizes are reduced by a few hundred to a few thousand entries.
|
neutral
|
train_97186
|
The best result for most cases is obtained at γ somewhere between 0 (hard EM) and 1 (EM).
|
eM (γ = 1.0) as given by where Acc represents the accuracy as evaluated on the ambiguous words of the given data.
|
neutral
|
train_97187
|
Suppose our task is to predict two output variables h 1 and h 2 coupled via linear constraints.
|
in order to better understand different UEM variations, we write the UEM E-step (6) explicitly as an optimization problem: CoDL (Chang et al., 2007) (NEW) EM with Lin.
|
neutral
|
train_97188
|
if the pair (e 1 , e 2 ) has relation type LOCATED IN then e 2 must have entity type LOC.
|
if computing the posterior and expected values of linear functions over each subcomponent is easy, then the algorithm works efficiently.
|
neutral
|
train_97189
|
Each of those approaches involve solving a similar generalized eigenvalue problem (Eq.
|
unlike the max margin formulations of SVM, it is not easy to rewrite the parameters a, b in terms of the Lagrangian multipliers α ij as C α xy itself depends on α ij 's.
|
neutral
|
train_97190
|
To improve its autocorrection performance, it is important for the system to have the capability to assess its own performance and learn from its mistakes.
|
these phenomena are captured in the word repetition and only word features.
|
neutral
|
train_97191
|
Traditional spell checking systems generally assume that misspellings are unintentional.
|
this paper describes a novel problem of assessing its own correction performance for an autocorrection system based on dialogue between two text messaging users.
|
neutral
|
train_97192
|
To train the classifier, we use SVM light (Joachims, 1999).
|
first, it extracts all the syntactic heads (i.e., the word tokens whose gold dependency labels are SUBJ, PRED, or GMOD).
|
neutral
|
train_97193
|
Note that the use of a supervised resolver like Reconcile does not render our approach supervised, since we can replace it with any resolver, be it supervised, heuristic, or unsupervised.
|
this suggests that the classifiers being trained in Setting 3 have enabled the discovery of additional coreference links.
|
neutral
|
train_97194
|
The ability to associate medical concepts with temporal expressions helps order medical concepts and determine potential temporal overlap between them.
|
the time-bin along with features extracted based on explicit temporal expressions co-occurring with the medical concepts indicate a coreference between the pair of medical concepts.
|
neutral
|
train_97195
|
Under this assumption, we can regularize the models from each view by constraining the amount by which we permit them to disagree on unlabeled instances.
|
we believe there are no natural dependencies between the semantic and temporal feature sets.
|
neutral
|
train_97196
|
This grammar represents a set of trees which we encode compactly using a weighted hypergraph (or packed forest), a data structure that defines a probability (or weight) for each tree.
|
we first transform the CYK parser and our grammar into a hypergraph and then compute the weights using inside-outside.
|
neutral
|
train_97197
|
Moreover, we assume that each target word t ∈ T s has a set of senses in common with s. These senses may also be shared among different target words.
|
moreover, we assume that each target word t ∈ T s has a set of senses in common with s. These senses may also be shared among different target words.
|
neutral
|
train_97198
|
Bilingual features were computed from 0.78 (S→E) and 1.04 (J→E) billion tokens of parallel text, primarily extracted from the Web using automated parallel document identification (Uszkoreit et al., 2010).
|
much prior work has explored the related task of monolingual word and phrase clustering.
|
neutral
|
train_97199
|
We therefore reduced the stack size in the phrase-based decoder so that it runs in the same amount of time as the cept-based decoder.
|
the drawback of the phrase-based model is the phrasal independence assumption, spurious ambiguity in segmentation and a weak mechanism to handle non-local reorderings.
|
neutral
|
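Each record above follows the same four-field schema given in the header (`id`, `sentence1`, `sentence2`, `label`). As an illustration only — not an official loader for this dataset — here is a minimal sketch of validating one record against that schema; the example values are copied from row train_97100, and `validate` is a hypothetical helper:

```python
# Minimal sketch: checking one record of this dump against the schema in the
# header (id, sentence1, sentence2, label). The header notes 4 label classes,
# but only "neutral" appears in this chunk, so labels are only checked to be
# non-empty strings.

EXPECTED_FIELDS = {"id", "sentence1", "sentence2", "label"}

def validate(record: dict) -> bool:
    """Check field names, string-typed values, and the train_* id prefix."""
    return (
        set(record) == EXPECTED_FIELDS
        and all(isinstance(record[f], str) and record[f] for f in EXPECTED_FIELDS)
        and record["id"].startswith("train_")
    )

row = {
    "id": "train_97100",
    "sentence1": 'A cross-language matching error resulted in the linking of "White House", ...',
    "sentence2": "the evaluation is dominated by the penalty for spurious system coreference chains.",
    "label": "neutral",
}
print(validate(row))  # True
```

A malformed record (missing fields or a non-string value) fails the same check, which makes this a cheap sanity filter when iterating over the full split.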