id: string (length 7 to 12)
sentence1: string (length 6 to 1.27k)
sentence2: string (length 6 to 926)
label: class (4 values)
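The records that follow use this schema: one id, a sentence pair, and one of four label classes (only "neutral" rows appear in this excerpt). As a minimal illustration, the sketch below shows how rows with these four fields could be read and tallied by label; it assumes the rows have been exported as JSON Lines under the hypothetical name pairs.jsonl, which is not stated anywhere in this dump.

```python
import json
from collections import Counter

def load_pairs(path):
    """Yield (id, sentence1, sentence2, label) tuples from a JSON Lines file
    whose records carry the four fields described in the schema above."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            rec = json.loads(line)
            yield rec["id"], rec["sentence1"], rec["sentence2"], rec["label"]

if __name__ == "__main__":
    # "pairs.jsonl" is a hypothetical export of the records listed below;
    # the dump itself does not name a file.
    counts = Counter(label for _, _, _, label in load_pairs("pairs.jsonl"))
    print(counts)  # expect up to 4 distinct label values, e.g. 'neutral'
```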
train_93600
As a prominent example, we consider Section 5 of the ACE-2005 event annotation guidelines, which provides a description for each event type.
to score the binary label assignment (x i , y i ), we use a small set of features that assess the similarity between x i and t's given seed list.
neutral
train_93601
To assess our approach, we compare it (Section 4) with the common fully-supervised approach, which requires annotated triggers for each target event type.
as we match performance of a state-of-the-art fully-supervised system over the ACE-2005 benchmark (and even surpass it), we offer our approach as an appealing way of reducing annotation effort while preserving result quality.
neutral
train_93602
One core is 2.4 times as fast at language identification and 1.8 to 6 times as fast at part-of-speech language modeling.
we tested the original Python, a Java implementation that "should be faster than anything else out there" (Weiss, 2013), a C implementation (Lui, 2014), and our replica in hardware.
neutral
train_93603
For each studied language we use Stanford CoreNLP for EN and ZH, and TreeTagger (Schmid, 1994) for ES to produce the tokens and the POS tags.
instead, we seek a direct transfer approach ( Figure 1) to cross-lingual NER (also classified as transductive transfer learning (Pan and Yang, 2010) and closely related to domain adaptation).
neutral
train_93604
Our aim is to increase the type-level coverage of FN.
despite many years of work, most of the words that one confronts in naturally occurring text do not appear at all in FN.
neutral
train_93605
In total, IWNLP enhances the results of Mate Tools in five of the six test cases.
to reduce stress from the servers and to easily reproduce our parsing results, we parse the latest of the monthly XML dumps from Wiktionary.
neutral
train_93606
For SVM, we employ the radial basis function kernel (RBF) and we use the wrapper provided by Weka for LibSVM (Chang and Lin, 2011).
many words have undergone transformations by the augmentation of language-specific diacritics when entering a new language (Ciobanu and Dinu, 2014a).
neutral
train_93607
(2015), we further appended the original messages (as if parroted back).
to ensure that the denominator is never zero, we assume that, for each i there exists at least one reference r i,j whose weight w i,j is strictly positive.
neutral
train_93608
Although distributional word vector dimensions cannot, in general, be identified with linguistic properties, it has been shown that some vector construction strategies yield dimensions that are relatively more interpretable (Murphy et al., 2012;Fyshe et al., 2014;Fyshe et al., 2015;Faruqui et al., 2015b).
table 1 shows the size of vocabulary and number of features induced from every lexicon.
neutral
train_93609
Every word in the lexicon is associated with these properties.
such analysis is difficult to generalize across models of representation.
neutral
train_93610
It also shows that our late fusion methods are more effective than our early fusion method.
a part of the work for improving such thesauri focuses on the filtering of the components of the distributional contexts of words (Padró et al., 2014;Polajnar and Clark, 2014) or their reweighting, either by turning the weights of these components into ranks (Broda et al., 2009) or by adapting them through a bootstrapping method from the thesaurus to improve (Zhitomirsky-Geffet and Dagan, 2009;Yamamoto and Asakura, 2010).
neutral
train_93611
We use part-of-speech information and dependency arcs from the gold annotation to extract noun phrases containing adjectives.
these values are different from the dependency length differences for noun phrases without a right dependent (panel a and b).
neutral
train_93612
In the light of recent research (Volkova et al., 2013;Hovy, 2015;Jørgensen et al., 2015), we explore the hypothesis that these biases transfer to NLP tools induced from these resources.
we then map each review to the NUTS region corresponding to the location.
neutral
train_93613
In all of these cases, vocabulary does not factor into the differences, since we are at the POS level.
these models perform better on texts written by certain people, namely those whose language is closer to the training data.
neutral
train_93614
Probabilistic topic models (Blei et al., 2003) are broadly used to uncover the hidden topics of tweets, since the low-dimensional semantic representation is crucial for many applications, such as product recommendation (Zhao et al., 2014), hashtag recommendation (Ma et al., 2014), user interest tracking (Sasaki et al., 2014), sentiment analysis (Si et al., 2013).
the hyperparameters are set as follows: for all the models, we set α = 50/K, β = 0.01; for Twitter-LDA, TwitterUB-LDA and Twitter-BTM, we set γ = 0.5.
neutral
train_93615
The vital segments for two classes are used to train the supervised classifier which can best classify each sentence to the correct author's class.
the co-authored documents include Web pages, books, academic papers and blog posts.
neutral
train_93616
Most of the models above are limited to modeling linear topical dependencies between words; word topical dependencies can also be modeled in a non-linear way.
and we define the topic model based on EGRF as Extended Global Topic Random Field (EGTRF).
neutral
train_93617
In practice, because of the large vocabulary size N , designing such a matrix is computationally prohibitive.
we count the term P [w i |s j ] only once when computing the likelihood of the sentence.
neutral
train_93618
Related also is the extensive work done in spatio-temporal modelling of meme spread.
the likelihood (1) is commonly approximated by considering subregions of t and assuming constant intensities in sub-regions of t (Møller and Syversveen, 1998;Vanhatalo et al., 2013) to overcome computational difficulties arising due to integration.
neutral
train_93619
One example is application of Hawkes processes (Yang and Zha, 2013), a probabilistic framework for modelling self-excitatory phenomena.
the likelihood (1) is commonly approximated by considering subregions of t and assuming constant intensities in sub-regions of t (Møller and Syversveen, 1998;Vanhatalo et al., 2013) to overcome computational difficulties arising due to integration.
neutral
train_93620
In terms of performing distributed representation learning for output variables, our proposed model shares similarity with the structured output representation learning approach developed by Srikumar and Manning (2014), which extends the structured support vector machines to simultaneously learn the prediction model and the distributed representations of the output labels.
, s * T } using the standard Viterbi algorithm (Rabiner and Juang, 1986) for each labeled source training sentence and each target test sentence.
neutral
train_93621
Indeed, the marginal productivity gains observed with QE at a global level become statistically significant in specific conditions, depending on the length (between 5 and 20 words) of the source sentences and the quality (0.2<HTER≤0.5) of the proposed MT suggestions (our third contribution).
to check this hypothesis we compared the HTER scores obtained in the two conditions (colored vs. grey flags), assuming that noticeable differences would be evidence of unwanted psychological effects.
neutral
train_93622
In this work, we use ReLU (Dahl et al., 2013) as the activation function; • w (1,j) is the parameter matrix for the j-th convolution unit on Layer-1; • ĉ i (0) is a vector constructed by concatenating word vectors in the k-sized sliding window i; • b (1,j) is a bias term. To distinguish the phrase pair from its context, we use one additional dimension in word embeddings: 1 for words in the phrase pair and 0 for the others.
they both extend the neural network joint model (NNJM) of Devlin et al.
neutral
train_93623
As more sub-models for longer distance reordering being integrated, the translation performance improved consistently, though the improvement leveled off quickly.
to select appropriate rules, more effective criteria are required.
neutral
train_93624
A translation quality gap between genres has also been observed in past OpenMT evaluation campaigns.
next, while spelling errors are common in all topics, its abundance is most prominent in culture.
neutral
train_93625
The idea is to count those words co-occurring with k as the context of j, where k ∈ V l 2 is the translationally equivalent word of j ∈ V l 1 .
the monolingual objective stems from the distributional hypothesis (Harris, 1954) and optimizes words in similar contexts into similar embeddings.
neutral
train_93626
[Table 2: Initial experiment varying class and instance granularity — Year: Per-track 69.77% / 58.58%; Per-album 70.60% / 57.15%.] For the follow-up experiment, we focus on the task of classifying at the per-album level of granularity, as ultimately this is the level at which the original annotations are obtained.
an instance is represented by a vector with four real values.
neutral
train_93627
M 2 was assessed by comparing its output against that of the official Helping Our Own (HOO) scorer (Dale and Kilgarriff, 2011), itself based on the GNU wdiff utility.
despite these many proposed metrics, no prior work has attempted to answer these questions by comparing them to human judgments.
neutral
train_93628
As demonstrated in previous work, this numeric representation of words has led to big improvements in many NLP tasks such as machine translation (Sutskever et al., 2014), question answering (Iyyer et al., 2014) and document ranking (Shen et al., 2014).
in between is the pronunciation of the sentence in Chinese, called PinYin, which is a form of romanized phonetic representation of Chinese, similar to the International Phonetic Alphabet (IPA) for English.
neutral
train_93629
These four radicals altogether convey the meaning that "the moment when sun arises from the grass while the moon wanes away", which is exactly "morning".
web search leverages many kinds of ranking signals, an important one of which is the preference signals extracted from click-through logs.
neutral
train_93630
Using parse tree patterns to judge the grammaticality of a sentence is not new.
fragment was among the 28 error types introduced in the CoNLL-2014 shared task (Ng et al., 2014), but the test set used in the task only contained 16 such errors and is too small for our purpose.
neutral
train_93631
For example, if X is Herman Melville's Moby-Dick, W can be Melville's complete works.
they are not directly used to resolve unigram sparsity.
neutral
train_93632
Function ϕ d measures the textual similarity between document d and the document d 0 to smooth.
we start by reviewing previous works.
neutral
train_93633
The basic idea is to smooth a document language model with the documents similar to the document under consideration through clustering (Liu and Croft, 2004;Tao et al., 2006).
with these works, we have proposed a language model smoothing framework which incorporates social factors as a regularizer.
neutral
train_93634
These models broadly fall into two categories: text-based and network-based methods.
the text-based profile locations are noisy and only 1-3% of tweets are geotagged (Cheng et al., 2010;Morstatter et al., 2013b), meaning that geolocation needs to be inferred from other information sources such as the tweet text and network relationships.
neutral
train_93635
Finally, training a predictor for the number of keywords yields further improvements (row 3) over a simple ratio of the number of input words.
the TF-IDF across different methods remains a strong unsupervised baseline (Hasan and Ng, 2010).
neutral
train_93636
The system extracts a list of candidate keywords from a document and trains a decision tree over a large set of hand engineered features, also including TF-IDF, in order to predict the correct keywords on the training set.
these systems are generally limited as these need supervision and cannot scale to new data or data in other languages.
neutral
train_93637
It has also been used as a feature to detect irony (Reyes et al., 2013).
we chose 184 topics split into 9 categories (politics, sport, etc.).
neutral
train_93638
Table 2 presents the result in terms of precision (P), recall (R), macro-averaged F-score (MAF) and accuracy (A).
having it alone is not enough to find ironic instances.
neutral
train_93639
Direct visual impression is that different users use different typing patterns to input Chinese.
singlesegNum(ssN) stands for the number of the segments whose length equals 1 in a sentence.
neutral
train_93640
Among them, and improved tense prediction by using eventuality and modality labels.
着 (zhe), 了 (le), 过 (guo)) and time expressions (e.g., 昨天 (yesterday)).
neutral
train_93641
• Local predictor with basic features (Local(b)) • Local predictor with basic features + dependency parsing features (Local(b+p)) • Local predictor with basic features + dependency parsing features + linguistic knowledge features (Local(b+p+l)) • Local predictor + all features introduced in Section 2.1 (Local(all)) • Conditional Random Fields (CRFs): We model a conversation as a sequence of sentences and predict tense using CRFs (Lafferty et al., 2001).
it is much more challenging to predict tense in Chinese conversations and there has not been an effective set of rules to predict Chinese tense so far due to the complexity of language-specific phenomena.
neutral
train_93642
Since most of the soft headers on Wiktionary have a distinct background color from the rest of their containing tables, we initially added a rule that treated content cells that defined a background color in HTML or inline CSS as header cells.
each content cell is matched with the headers immediately up the column, to the left of the row, and in the "corners" located at the row and column intersection of the previous two types of headers.
neutral
train_93643
This validity still holds when two EDUs are connected with a symmetric relation such as joint.
there are a number of recent studies employing RST features for passage re-ranking under question answering (Joty and Moschitti, 2014;Surdeanu et al., 2014).
neutral
train_93644
The results clearly show that the thread-level features are important, providing consistent improvement for all our learning models.
in the second example (Q u 4 ), the first two comments are classified as bad when using the basic features.
neutral
train_93645
Our local classifiers are support vector machines (SVM) with C = 1 (Joachims, 1999), logistic regression with a Gaussian prior with variance 10, and logistic ordinal regression (McCullagh, 1980).
we also included another real-valued feature: max(20, i)/20, where i represents the position of the comment in the thread.
neutral
train_93646
The goal of the feed forward processing is to calculate the similarity between a pair of questions (q 1 , q 2 ).
after producing the BOW and CNN representations for the two input questions, the BOW-CNN computes two partial similarity scores: s bow (q 1 , q 2 ) for the BOW representations, and s conv (q 1 , q 2 ) for the CNN representations.
neutral
train_93647
1 displays top 10 semantically similar words monolingually, across-languages and combined/multilingually for one ES, IT and NL word.
the results clearly reveal that, although both other BWE models critically rely on parallel Europarl data for training, and Gouws et al.
neutral
train_93648
Exp II: Shuffling and Window Size Since our BWESG model relies on the pre-training random shuffling procedure, we also test whether the shuffling has significant or rather minor impact on the induction of BWEs and final BLI scores.
the architecture of our BWE Skip-Gram model for learning bilingual word embeddings from document-aligned comparable data.
neutral
train_93649
The low-rank tensor models are also at least twice as fast to train as the full tensors: on a single core, training a rank-1 tensor takes about 5 seconds for each verb on average, ranks 5-50 each take between 1 and 2 minutes, and the full tensors each take about 4 minutes.
we filter the SVO triples to a set containing 345 distinct verbs: the verbs from our test datasets, along with some additional high-frequency verbs included to produce more representative sentence spaces.
neutral
train_93650
When pattern X is allowed, both m a and m b are not directly associated with any natural language word, so we are able to further insert arbitrarily many (compatible) semantic units between the two units m a and m b while the resulting relaxed hybrid tree remains valid.
results showed that our system consistently yielded higher results than all the previous systems, including our state-of-the-art relaxed hybrid tree system (the full model, when all the features are used), in terms of both accuracy score and F1-measure.
neutral
train_93651
In early experiments, Naive Bayes performed comparably to or outperformed SVM because the dimensionality of the feature space was relatively low.
overall, we achieve our best results when including both precedent and subsequent context along with the question in our feature space.
neutral
train_93652
In this approach, they also chose to use a compressed feature set for n-grams and POS n-grams.
we used unigrams, bigrams, POS bigrams, and POS trigrams of a question and its immediately preceding and following context as feature sets.
neutral
train_93653
A similar incongruity is observed in the sarcastic 'My tooth hurts!
consider the example "I love this paper so much that I made a doggy bag out of it".
neutral
train_93654
Note that the 'native polarity' need not be correct.
(as in case of the tweet 'so i have to be up at 5am to autograph 7,000 pics of myself?
neutral
train_93655
[E1] expresses the happiness emotion through English, and the anger emotion in [E2] is expressed through both Chinese and English, while the fear emotion in [E3] is expressed through a mixed English-Chinese phrase (holdØ4).
emotions in codeswitching texts differ from monolingual texts in that they can be expressed in either monolingual or bilingual forms.
neutral
train_93656
This result indicates LinAdapt well captures the fact that users express opinions differently even with the same words.
after each evaluation run, we added an extra 1000 reviews and repeated the training and evaluation.
neutral
train_93657
Our method is inspired by a personalized ranking model adaptation method developed by .
high variance indicates the word's sentiment polarity frequently changes across different users.
neutral
train_93658
Failure to rec-ognize this difference across users will inevitably lead to inaccurate understanding of opinions.
for each testing case, we are estimating an independent classification model.
neutral
train_93659
First, "奥巴马 (Obama)" is found in the emotion keyword list and tagged.
we rank the templates according to their frequency, and adopt a dominating set algorithm (Johnson, 1974). We believe that human perception of an emotion is through recognizing important events or semantic contents.
neutral
train_93660
To solve these problems, we translate the whole sentences but with reordering constraints ensuring that the opinionated segments are preserved during translation.
(2013) compare several of these approaches.
neutral
train_93661
Second, whereas pagerank assigns an equal weight to the edges connected between an unseen word and its neighbor nodes, we consider their similarities as weights to construct a weighted graph such that neighbor nodes more similar to the unseen word may contribute more to estimate its VA ratings.
few studies have sought to predict the VA rating of words using regression-based methods (Wei et al., 2011;Malandrakis et al., 2011).
neutral
train_93662
In this paper we report on the extension of the P obs model to also consider listener's visual behaviour.
both the target intended by the IG (o t ) and the one selected by the IF (o p ) were recorded.
neutral
train_93663
A new tag addition method is proposed to solve this problem.
these semantic grammar approaches carry a high development cost and they can also lead to fragile operations since users do not typically know what grammatical constructions are supported by the system.
neutral
train_93664
Firstly, we tune λ 1 by fixing λ 2 = 0.1.
dependency parsing is a crucial component of many natural language processing systems, for tasks such as text classification (Özgür and Güngör, 2010), statistical machine translation (Xu et al., 2009), relation extraction (Bunescu and Mooney, 2005), and question answering (Cui et al., 2005).
neutral
train_93665
This shows that when trained using a small data set, the source language parser is more accurate than the supervised model.
figure 2 shows the average LAS for all 9 languages (except English) on different data sizes using different values of λ 1 .
neutral
train_93666
In general, AMR substructures are graphs.
in general, AMR substructures are graphs.
neutral
train_93667
From Table 1, we can see that the performance of the baseline transition-based system remains very stable when different dependency parsers used are trained on same data set.
for action that predicts the concept label (NEXT-NODE-l c ), we check whether the candidate concept label l c matches the frameset predicted by the semantic role labeler.
neutral
train_93668
We report final AMR parsing results that show an improvement of 7% absolute in F 1 score over the best previously reported result.
the Charniak parser that is trained on a much larger and more diverse dataset (CHARNIAK (ON)) yields the best overall AMR parsing performance.
neutral
train_93669
tain quantitative correlations between parts of the neural network and linguistic properties, in both speech (Wu and King, 2016;Alishahi et al., 2017;Wang et al., 2017) and language processing models (Köhn, 2015;Qian et al., 2016a;Adi et al., 2016;Linzen et al., 2016;Qian et al., 2016b).
we also find that representations from higher layers are better at capturing semantics, even though these are word-level labels.
neutral
train_93670
We compare our method ALLSmooth (CCWM) with Bahdanau et al.
the existing PBSMT methods that used external resources to translate unknown words for SMT are hard to directly introduce into NMT, because of NMT's soft-alignment mechanism (Bahdanau et al., 2015).
neutral
train_93671
This dependency parsing result will be transformed in another step for traversing the tree, which will be described in the next section to create a dependency tree.
it has been shown that the combination of words and their dependency information can boost performance.
neutral
train_93672
In this work, we proposed a method in which the Seq2Dep NMT model is trained by utilizing syntactic dependencies to provide the model more abundant information.
to utilize the information of both "head" words and syntactic dependencies between them to produce better output.
neutral
train_93673
They report some improvements in translation quality arguing that the attention model has learned to better align source and target words.
in order to compare attentions of multiple systems as well as to measure the difference between attention and word alignment, we convert the hard word alignments into soft ones and use cross entropy between attention and soft alignment as a loss function.
neutral
train_93674
In word alignment, most target words are aligned to one source word.
to the best of our knowledge there is no study that provides an analysis of what kind of phenomena is being captured by attention.
neutral
train_93675
Directly learning from PA based on a forest-based objective in LLGPar is first proposed by Li et al.
second, the two settings lead to very similar performance on Biaffine, without a clear trend.
neutral
train_93676
Both GN3Par and LTPar suffer from the inexact search problem.
the first approach is based on a forest-based training objective for two CRF parsers, i.e., a biaffine neural network graph-based parser (Biaffine) and a traditional log-linear graph-based parser (LL-GPar).
neutral
train_93677
The second approach is based on the idea of constrained decoding for three parsers, i.e., a traditional linear graphbased parser (LGPar), a globally normalized neural network transition-based parser (GN3Par) and a traditional linear transition-based parser (LTPar).
as our reviewers kindly point out, more extensive experiments and systematic analysis are needed to really understand this interesting issue and provide stronger findings, which we leave for future work.
neutral
train_93678
On top of the neural network, we introduce a probabilistic structured layer, defining a conditional log-linear model over nonprojective trees.
l is not negligible comparing to n. It should be noticed that in our labeled model, for different dependency label l we use the same vector representation ϕ(x i ) for each word x i .
neutral
train_93679
Experimental results using manually annotated datasets for lexical paraphrase showed that the proposed method outperformed bilingual pivoting and distributional similarity in terms of metrics such as MRR, MAP, coverage, and Spearman's correlation.
in this study, we conducted the evaluation by applying Pearson's correlation coefficient with a five-step manual evaluation using five datasets constructed by SemEval (Agirre et al., 2012, 2013, 2014, 2015, 2016).
neutral
train_93680
Furthermore, our aim was to calculate not the strength of co-occurrence (relation) between words, but the paraphrasability.
we used this dataset to evaluate the acquired paraphrase pairs by MRR and MAP, following Pavlick et al.
neutral
train_93681
With respect to the conditional paraphrase probability and PMI, it is necessary to consider up to the 400th place to cover all correct paraphrases.
we employed the conditional paraphrase probability of bilingual pivoting given in Equation (1), the symmetric paraphrase score of PPDB given by Equation (3), and distributional similarity as baselines, and compared them with PMI shown in Equation (7) and the MIPA score given in Equation (11).
neutral
train_93682
We approximate this marginalization by constructing a tree where the root is the predicate, p, the branches are likely sequences of arguments, and the leaves are the word and label for which we need to estimate a probability, w:l. Formally, we define this tree of possible sequences as: where w f,0 :l f,0 = p:PRED; k and T are thresholds; and argmax k (q) is the k word:label pairs that have the highest probability of being the next argument given the sequence q according to the PRNSFM.
as shown in Figure 2, we use two different embedding layers, one for word values and one for semantic labels, and the two embedding vectors are concatenated before being passed to the LSTM layer.
neutral
train_93683
Our models do not require any handcrafted iSRL annotations for training, and thus can be applied to all predicates observed in large unannotated data on which they are trained.
for a good reader, a reasonable interpretation of the second loss should be that it receives the same A0 and A1 as the first instance.
neutral
train_93684
It involves more than 100 features but does not include word embeddings, and hence we compare it with the PSL models of the weak base setting.
(3) We implement and discuss the transitivity inter-and intra-layers and conclude the transitivity within the observed layer brings the most performance gain.
neutral
train_93685
We start from describing the features for each lexicon pair.
empirically, those path p whose length exceed 10 are dropped as the inference chain is too long.
neutral
train_93686
False alarms come from pairs that contain various unknown Chinese compound words that E-Hownet does not include, e.g., 分 給(distribute to) is composed of 分(issue) and 給(give).
in our PSL models, all possible feature-layered transitivities between pairs are explored.
neutral
train_93687
For recall, the feature-layer transitivity (PSL(WeakBase TrFeat)) enables the model to reach more words for a better recall, while the enrichment of the prior knowledge in PSL(WeakBase TrObv) helps to eliminate uncertainty but decreases recall.
we define the cohesion score of a semantic type with E-Hownet to model the generality.
neutral
train_93688
They also used word insertions and deletions for forced decoding, but they used a high penalty for all insertions and deletions.
for each stack, a beam of the best n hypotheses is kept to speed up the decoding process.
neutral
train_93689
All the embeddings are fine-tuned during training by back-propagating gradients.
the model performs very well despite being fully character based.
neutral
train_93690
Using higher-order n-grams requires more data for training.
following the work of Kruengkrai et al.
neutral
train_93691
Instead of using the Max-Margin criterion (Taskar et al., 2005) adopted by previous neural network models for CWS (Zheng et al., 2013;Pei et al., 2014;Chen et al., 2015a,b), we try to directly maximize the log-probability of the correct tag sequence following Lample et al.
CWS accuracies can drop gravely on cross-domain corpora.
neutral
train_93692
Further, we have created multiple documents where each document contains N continuous segments from the original threads.
of this, the proposed IB based approach requires a limited number of involved computations, more precisely, linear in terms of number of text snippets.
neutral
train_93693
Here, the concept representation c Mult is a multinomial distribution over the properties in Q.
we use an extension of LDA (Blei et al., 2003) to implement our hypotheses on the usefulness of overarching structure, both commonalities in selectional constraints across predicates, and cooccurrence of properties across concepts.
neutral
train_93694
In the SimLex-999 dataset which focuses on the word similarity, the cognition (lexical relation) and the sentiment modules turned out to be important.
noting that the different types of words may have different sensitivity toward the modules, we adjusted the relative weights for a particular aspect of interest to be from 0.1 to 3.5 while maintaining others to 1.0.
neutral
train_93695
Moreover, because several senses are assigned to the current word with probabilities, we leverage all the related senses to predict the context words.
adopting an online procedure, FCSE-1 clusters the contexts of one word incrementally.
neutral
train_93696
Some bi-gram probabilities have several peaks, and they vary in accordance with parts of speech.
the inference for NPYLM is introduced first; then, the inference of PYHSMM is explained.
neutral
train_93697
w n represents the n-th word in each sentences.
it might be difficult for the PYHSMM-D to separate these two contexts in the case of completely unsupervised training.
neutral
train_93698
We begin with a tokenized sentence which we then convert to a sentence matrix, the rows of which are word vector representations of each token.
For this dataset we report the Area Under Curve (AUC), rather than accuracy, because it is imbalanced.
neutral
train_93699
We fixed the filter region sizes and the number of feature maps as in the baseline configuration, thus changing only the pooling strategy or pooling region size.
before training, we under-sampled negative instances to make classes sizes equal.
neutral