| text (string, 82 to 736 chars) | label (int64: 0 or 1) |
|---|---|
in this paper , we presented a comprehensive analysis of the stylistic features isolated in the endings of the original story cloze test ( sct-v1.0 )---we retrained the stanford named entity recognizer 20 on the ontonotes data | 0 |
our trigram word language model was trained on the target side of the training corpus using the srilm toolkit with modified kneser-ney smoothing---multiword expressions are lexical items that can be decomposed into single words and display idiosyncratic features | 0 |
this cnn-based architecture accepts multiple word embeddings as inputs---that can exploit multiple , variable sized word embeddings | 1 |
futrelle and nikolakis , 1995 ) developed a constraint grammar formalism for parsing vector-based visual displays and producing structured representations of the elements comprising the display---futrelle and nikolakis , 1995 ) developed a constraint grammar for parsing vectorbased visual displays and producing representations of the elements comprising the display | 1 |
we apply the method of turian et al , combining real-valued embeddings with discrete features in the linear baseline ,---specifically , we use the clusters with 1000 classes from turian et al , which are induced with the brown algorithm | 1 |
these features were optimized using minimum error-rate training and the same weights were then used in docent---feature weights were set with minimum error rate training on a tuning set using bleu as the objective function | 1 |
the significance test was performed using the bootstrap resampling method proposed by koehn---the statistical significance test is performed by the re-sampling approach | 1 |
instead of optimising individual word embeddings , our model uses general-purpose embeddings and optimises a separate neural component to adapt these to the specific task---available in external resources , our model uses general-purpose embeddings and optimises a separate neural component to adapt these to the specific task | 1 |
each translation model is tuned using mert to maximize bleu---in this paper , we present a predicate-argument structure analysis that simultaneously resolves the anaphora of zero pronouns | 0 |
named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance---named entity recognition is a well established information extraction task with many state of the art systems existing for a variety of languages | 1 |
for training the translation model and for decoding we used the moses toolkit---we used the phrase-based smt in moses 5 for the translation experiments | 1 |
they used the parser with the stanford dependency scheme , which defines a hierarchy of 48 grammatical relations---the word embeddings were pre-trained using skip-gram on all 1 , 043 , 064 articles in the japanese version of wikipedia | 0 |
we use word2vec tool for learning distributed word embeddings---we use the word2vec tool with the skip-gram learning scheme | 1 |
the penn discourse treebank is the largest corpus richly annotated with explicit and implicit discourse relations and their senses---after harvesting axioms from textbooks , we also present an approach to parse the axiom mentions to horn clause rules | 0 |
language models were built using the sri language modeling toolkit with modified kneser-ney smoothing---the language models used were 7-gram srilm with kneser-ney smoothing and linear interpolation | 1 |
text categorization is the problem of automatically assigning predefined categories to free text documents---text categorization is the classification of documents with respect to a set of predefined categories | 1 |
this paper introduces a method for computational analysis of move structures in abstracts of research articles---the support vector machine based machine learning approach works on discriminative approach and makes use of both positive and negative examples to learn the distinction between the two classes | 0 |
srilm toolkit is used to build these language models---trigram language models are implemented using the srilm toolkit | 1 |
following mnih and hinton , the soul model combines the neural network approach with a class-based lm---following , the soul model combines the neural network approach with a class-based lm | 1 |
word sense disambiguation ( wsd ) is the task of identifying the correct meaning of a word in context---word sense disambiguation ( wsd ) is a fundamental task and long-standing challenge in natural language processing ( nlp ) | 1 |
for german , the pos and morphological tags were obtained from rftagger which provides morphological information such as case , number and gender for nouns and tense for verbs---for german we used morphologically rich tags from rftagger , that contains morphological information such as case , number , and gender for nouns and tense for verbs | 1 |
the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd )---word sense disambiguation ( wsd ) is the task of automatically determining the correct sense for a target word given the context in which it occurs | 1 |
our results show why it is important to be precise about exactly what tree-to-dependency conversion scheme is used---when we consider tree-to-dependency conversion schemes , downstream evaluation becomes particularly important | 1 |
to make systematic benchmarking on the task possible , we vet a collection of comparison paragraphs to obtain a test set on which human performs with an accuracy 94.2 %---in the task , we have collected a set of paragraphs as the test set on which human can accomplish the task with an accuracy of 94 . 2 % | 1 |
the reference corpora and data sets are pos tagged with the ims treetagger---the stts tags are automatically added using treetagger | 1 |
the encoder and decoder are two-layer lstms with a 500-dimension hidden size and 500-dimension word embeddings---the encoder units are bidirectional lstms while the decoder unit incorporates an lstm with dot product attention | 1 |
in this study , we build both a regression model and a ranking model to evaluate user simulation---in this study , we also strive to develop a prediction model of the rankings of the simulated users | 1 |
we implement our approach in the framework of phrase-based statistical machine translation---as shown in table 3 , our approach resolves non-pronominal anaphors with the recall of 51 . 3 ( 39 . 7 ) and the precision of 90 . 4 ( 87 . 6 ) | 0 |
we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus---we train trigram language models on the training set using the sri language modeling tookit | 1 |
vo and zhang exploit the left and right context around a target in a tweet and combine low-dimensional embedding features from both contexts and the full tweet using a number of different pooling functions---vo and zhang split a tweet into a left context and a right context according to a given target , using distributed word representations and neural pooling functions to extract features | 1 |
in this paper , we aim to push multi-category bootstrapping back into its original minimally-supervised framework , with as little performance loss as possible---in doing so , we revert the multi-category bootstrapping framework back to its originally intended minimally supervised framework , with little performance | 1 |
as for ej translation , we use the stanford parser to obtain english abstraction trees---we apply the rules to each sentence with its dependency tree structure acquired from the stanford parser | 1 |
we use the moses statistical mt toolkit to perform the translation---we use a pbsmt model built with the moses smt toolkit | 1 |
relation extraction ( re ) is the task of extracting instances of semantic relations between entities in unstructured data such as natural language text---translation quality is measured in truecase with bleu on the mt08 test sets | 0 |
we parse the senseval test data using the stanford parser generating the output in dependency relation format---we use the stanford dependency parser to parse the statement and identify the path connecting the content words in the parse tree | 1 |
we implemented the different aes models using scikit-learn---distributions inferred from a similarity graph are used to regularize the learning of crfs model on labeled and unlabeled data | 0 |
generative models like lda and plsa have been proved to be very successful in modeling topics and other textual information in an unsupervised manner---traditional topic models such as lda and plsa are unsupervised methods for extracting latent topics in text documents | 1 |
in this paper , we identify the knowledge diffusion in conversations and propose an endto-end neural knowledge diffusion model to deal with the problem---in this paper , we propose a neural knowledge diffusion ( nkd ) dialogue system to benefit the neural dialogue generation with the ability of both convergent and divergent thinking | 1 |
the feature weight λ i in the log linear model is determined by using the minimum error rate training method---the parameter for each feature function in log-linear model is optimized by mert training | 1 |
this study focuses on generic summarization---study thus chooses to focus on neural extractive summarization | 1 |
we use pre-trained 100 dimensional glove word embeddings---we use pre-trained glove embeddings to represent the words | 1 |
stance detection is a difficult task since it often requires reasoning in order to determine whether an utterance is in favor of or against a specific issue---stance detection is the task of classifying the attitude previous work has assumed that either the target is mentioned in the text or that training data for every target is given | 1 |
a variety of auxiliary resources have been used to induce interlingual features , including bilingual lexicon , and unlabeled parallel sentences---to induce interlingual features , several resources have been used , including bilingual lexicon and parallel corpora | 1 |
twitter is a very popular micro blogging site---twitter is a social platform which contains rich textual content | 1 |
in this paper , we present gated self-matching networks for reading comprehension and question answering---coreference resolution is the task of partitioning the set of mentions of discourse referents in a text into classes ( or ‘ chains ’ ) corresponding to those referents ( cite-p-12-3-14 ) | 0 |
with english gigaword corpus , we use the skip-gram model as implemented in word2vec 3 to induce embeddings---alignment types are shown with the ? symbol | 0 |
we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus---for the tree-based system , we applied a 4-gram language model with kneserney smoothing using srilm toolkit trained on the whole monolingual corpus | 1 |
generative topic models widely used for ir include plsa and lda---examples of topic models include plsi and lda | 1 |
for this reason , as noted by sproat et al , an sms normalization must be performed before a more conventional nlp process can be applied---therefore , a text normalization process must be performed before any conven-tional nlp process is implemented | 1 |
lexical simplification is a technique that substitutes a complex word or phrase in a sentence with a simpler synonym---lexical simplification is the task of identifying and replacing cws in a text to improve the overall understandability and readability | 1 |
we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing---in this task , we use the 300-dimensional 840b glove word embeddings | 0 |
turney and littman calculate the pointwise mutual information of a given word with positive and negative sets of sentiment words---turney and littman compute the point wise mutual information of the target term with each seed positive and negative term as a measure of their semantic association | 1 |
the irstlm toolkit is used to build language models , which are scored using kenlm in the decoding process---the target language model is built on the target side of the parallel data with kneser-ney smoothing using the irstlm tool | 1 |
in this work , we propose a deep reinforcement learning framework for robust distant supervision---we propose a novel deep reinforcement learning framework for robust distant supervision | 1 |
in our implementation , we employ a kn-smoothed 7-gram model---in this and our other n-gram models , we used kneser-ney smoothing | 1 |
we selected conditional random fields as the baseline model---in this task , we used conditional random fields | 1 |
we measure translation performance by the bleu and meteor scores with multiple translation references---we use case-sensitive bleu-4 to measure the quality of translation result | 1 |
reinforcement learning , the leading approach for learning a dialogue strategy , demonstrates powerful results---reinforcement learning , the leading method for dialogue strategy learning , can yield powerful results | 1 |
we use pre-trained glove vector for initialization of word embeddings---we experiment with word2vec and glove for estimating similarity of words | 1 |
we use the glove vectors of 300 dimension to represent the input words---the language model is trained on the target side of the parallel training corpus using srilm | 0 |
lattices are learned from a dataset of automatically-annotated definitions from wikipedia---classifiers are learned from a training set of textual definitions | 1 |
an idiom is a phrase whose meaning can not be obtained compositionally , i.e. , by combining the meanings of the words that compose it---transliteration is a subtask in ne translation , which translates nes based on the phonetic similarity | 0 |
convolutional neural networks have obtained good results in text classification , which usually consist of convolutional and pooling layers---interestingly convolutional neural networks , widely used for image processing , have recently emerged as a strong class of models for nlp tasks | 1 |
the weights used during the reranking are tuned using the minimum error rate training algorithm---all the feature weights and the weight for each probability factor are tuned on the development set with minimum-error-rate training | 1 |
in particular , we used the english and spanish sides of the europarl parallel corpus---we selected the french sentences for the manual annotation from the parallel europarl corpus | 1 |
turney used mutual information to detect the best answer to questions about synonyms from test of english as a foreign language and english as a second language---turney used mutual information to choose the best answer to questions about near-synonyms in the test of english as a foreign language and english as a second language | 1 |
we used the srilm toolkit to generate the scores with no smoothing---we used the disambig tool provided by the srilm toolkit | 1 |
we demonstrate that concept drift is an important consideration---semantic role labeling ( srl ) is the process of producing such a markup | 0 |
recently there are some efforts in applying machine learning approaches to the acquisition of dialogue strategies---in recent years , machine learning techniques , in particular reinforcement learning , have been applied to the task of dialogue management | 1 |
although the corpus is annotated at the clause and phrase levels , we use the sentence-level annotations associated with the dataset in---in our experiments , we also use recognizer confidence scores and a limited number of acoustic-prosodic features ( e . g . amplitude in the speech signal ) | 0 |
marcu and wong proposed the joint probability model which directly estimates the phrase translation probabilities from the corpus in a theoretically governed way---marcu and wong argued for a different phrase-based translation modeling that directly induces a phrase-by-phrase lexicon model from word-wise data | 1 |
koehn and hoang propose factored translation models that combine feature functions to handle syntactic , morphological , and other linguistic information in a log-linear model---koehn and hoang present factored translation models as an extension to phrase-based statistical machine translation models | 1 |
experiments show that the proposed methods significantly outperform the standard vaes and can discover meaningful latent actions from these datasets---we used the implementation of random forest in scikitlearn as the classifier | 0 |
barzilay and mckeown extracted both single-and multiple-word paraphrases from a sentence-aligned corpus for use in multi-document summarization---barzilay and mckeown extract both singleand multiple-word paraphrases from a monolingual parallel corpus | 1 |
we use the term-sentence matrix to train a simple generative topic model based on lda---we use latent dirichlet allocation , or lda , to obtain a topic distribution over conversations | 1 |
we train a 4-gram language model on the xinhua portion of the english gigaword corpus using the srilm toolkits with modified kneser-ney smoothing---we train a 5-gram language model with the xinhua portion of english gigaword corpus and the english side of the training set using the srilm toolkit | 1 |
to the best of our knowledge , no previous work has explored this aspect of user-generated text---to the best of our knowledge , there is little previous work on mining user-generated data | 1 |
the results we obtained are encouraging , considering the simplicity of our approach---we obtained results which are encouraging , considering the simplicity of our method | 1 |
bengio et al proposed a probabilistic neural network language model for word representations---bengio et al propose a feedforward neural network to train a word-level language model with a limited n-gram history | 1 |
we show how focus of attention can be used as the basis on which this decision can be made---we have shown how focus of attention can be used as the basis for a language generator | 1 |
most existing grounded language learning algorithms are either supervised or weakly-supervised---most of the recent grounded language learning algorithms rely on weaker supervision | 1 |
this paper proposes to study the problem of identifying intention posts in online discussion forums---in this paper , we study a novel problem which is also of great value , namely , intention identification , which aims to identify discussion posts | 1 |
dave et al , riloff and wiebe , bethard et al , wilson et al , yu and hatzivassiloglou , choi et al , kim and hovy , wiebe and riloff ,---dave et al , riloff and wiebe , bethard et al , pang and lee , wilson et al , yu and hatzivassiloglou , | 1 |
automatic evaluation results in terms of bleu scores are provided in table 2---case-sensitive bleu scores 4 for the europarl devtest set are shown in table 1 | 1 |
in the task of thesaurus extraction , the same overall results are obtained extracting from the web corpus as a traditional corpus of printed texts---in the task of thesaurus extraction , the same overall results are obtained extracting from the web corpus | 1 |
transliteration is the task of converting a word from one writing script to another , usually based on the phonetics of the original word---transliteration is the process of converting terms written in one language into their approximate spelling or phonetic equivalents in another language | 1 |
this has been done by representing the word meaning in context as a point in a high-dimensional semantics space---we perform pre-training using the skip-gram nn architecture available in the word2vec 13 tool | 0 |
we used the svd implementation provided in the scikit-learn toolkit---for all classifiers , we used the scikit-learn implementation | 1 |
word alignment is the task of identifying translational relations between words in parallel corpora , in which a word at one language is usually translated into several words at the other language ( fertility model ) ( cite-p-18-1-0 )---word alignment , which can be defined as an object for indicating the corresponding words in a parallel text , was first introduced as an intermediate result of statistical translation models ( cite-p-13-1-2 ) | 1 |
we use the moses statistical mt toolkit to perform the translation---researchers have achieved promising improvements in tree-based machine translation | 0 |
we implement classification models using keras and scikit-learn---we use the scikit-learn toolkit as our underlying implementation | 1 |
we build an open-vocabulary language model with kneser-ney smoothing using the srilm toolkit---we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit | 1 |
case-insensitive nist bleu was used to measure translation performance---the translation quality is evaluated by case-insensitive bleu and ter metrics using multeval | 1 |
word embedding approaches like word2vec or glove are powerful tools for the semantic analysis of natural language---word embeddings are considered one of the key building blocks in natural language processing and are widely used for various applications | 1 |
relation extraction is the task of detecting and characterizing semantic relations between entities from free text---relation extraction ( re ) is the task of recognizing relationships between entities mentioned in text | 1 |
lexical cohesion is defined as the cohesion that arises from semantic relationships between words---lexical chains are a representation of lexical cohesion as sequences of semantically related words | 1 |
mitchell and lapata investigated a variety of compositional operators to combine word vectors into phrasal representations---mitchell and lapata presented a framework for representing the meaning of phrases and sentences in vector space | 1 |
we use srilm for n-gram language model training and hmm decoding---we use 5-grams for all language models implemented using the srilm toolkit | 1 |
a 4-grams language model is trained by the srilm toolkit---we trained a 5-grams language model by the srilm toolkit | 1 |
kim and hovy try to determine the final sentiment orientation of a given sentence by combining sentiment words within it---kim and hovy build three models to assign a sentiment category to a given sentence by combining the individual sentiments of sentimentbearing words | 1 |
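
Each row above pairs two citation sentences separated by `---`, with a binary label (1 for pairs that appear to paraphrase the same citation context, 0 for unrelated pairs). Below is a minimal parsing sketch, assuming the preview rows are stored one per line in a plain-text file; the path `pairs.txt`, the function name `parse_rows`, and the exact row layout are assumptions for illustration, not part of any official dataset release.

```python
# Sketch: parse rows of the form "<sentence1>---<sentence2> | <label> |"
# into (sentence1, sentence2, label) triples. Header and separator rows
# are skipped because their last cell does not parse as an integer.

from typing import Iterator, Tuple


def parse_rows(path: str) -> Iterator[Tuple[str, str, int]]:
    """Yield (sentence1, sentence2, label) triples from table rows."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            # Drop surrounding whitespace and the trailing pipe.
            line = line.strip().rstrip("|").strip()
            if "|" not in line or "---" not in line:
                continue  # not a data row
            text, label_cell = line.rsplit("|", 1)
            try:
                label = int(label_cell.strip())
            except ValueError:
                continue  # header or "|---|---|" separator row
            # The first "---" separates the two sentences of the pair.
            sent1, sent2 = text.strip().split("---", 1)
            yield sent1.strip(), sent2.strip(), label


if __name__ == "__main__":
    for s1, s2, y in parse_rows("pairs.txt"):
        print(y, s1[:50], "///", s2[:50])
```

Splitting on the first `---` occurrence (rather than all of them) keeps any stray hyphen runs inside the second sentence intact, and parsing the label cell as an integer doubles as a filter for the non-data header lines.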