| context (string, 3.85k–99.8k chars) | questions (list, 1–12 items) | answers (list, 1–12 items) |
|---|---|---|
Automatic judgment prediction is to train a machine judge to determine whether a certain plea in a given civil case would be supported or rejected. In countries with civil law system, e.g. mainland China, such process should be done with reference to related law articles and the fact description, as is performed by a h... | [
"what are their results on the constructed dataset?",
"what evaluation metrics are reported?",
"what civil field is the dataset about?",
"what are the state-of-the-art models?",
"what is the size of the real-world civil case dataset?",
"what datasets are used in the experiment?"
] | [
[
""
],
[
"",
""
],
[
"",
""
],
[
"",
""
],
[
"100 000 documents",
""
],
[
""
]
] |
computational sociolinguistics, dehumanization, lexical variation, language change, media, New York Times, LGBTQ. Warning: this paper contains material that some may find offensive or upsetting. Despite the American public's increasing acceptance of LGBTQ people and recent legal successes, LGBTQ individuals frequently rem... | [
"Do they model semantics ",
"How do they identify discussions of LGBTQ people in the New York Times?",
"Do they analyze specific derogatory words?"
] | [
[
"",
""
],
[
""
],
[
"",
""
]
] |
Language model pretraining has advanced the state of the art in many NLP tasks ranging from sentiment analysis, to question answering, natural language inference, named entity recognition, and textual similarity. State-of-the-art pretrained models include ELMo BIBREF1, GPT BIBREF2, and more recently Bidirectional Encod... | [
"What is novel about their document-level encoder?",
"What rouge score do they achieve?",
"What are the datasets used for evaluation?"
] | [
[
""
],
[
"Best results on unigram:\nCNN/Daily Mail: ROUGE F1 43.85\nNYT: ROUGE Recall 49.02\nXSum: ROUGE F1 38.81",
"Highest scores for ROUGE-1, ROUGE-2 and ROUGE-L on CNN/DailyMail test set are 43.85, 20.34 and 39.90 respectively; on the XSum test set 38.81, 16.50 and 31.27 and on the NYT test set... |
This work is licenced under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/In the spirit of the brevity of social media's messages and reactions, people have got used to express feelings minimally and symbolically, as with hashtags on Twitter and In... | [
"What was their performance on emotion detection?",
"Which existing benchmarks did they compare to?",
"Which Facebook pages did they look at?"
] | [
[
"Answer with content missing: (Table 3) The authors' best model, B-M, achieves average micro F-scores of 0.409, 0.459, and 0.411 on the Affective, Fairy Tales, and ISEAR datasets respectively."
],
[
"",
""
],
[
"",
""
]
] |
Microblogging such as Twitter and Weibo is a popular social networking service, which allows users to post messages up to 140 characters. There are millions of active users on the platform who stay connected with friends. Unfortunately, spammers also use it as a tool to post malicious links, send unsolicited messages t... | [
"LDA is an unsupervised method; is this paper introducing an unsupervised approach to spam detection?",
"What is the benchmark dataset and is its quality high?",
"How do they detect spammers?"
] | [
[
"",
""
],
[
"Social Honeypot dataset (public) and Weibo dataset (self-collected); yes",
"Social Honeypot, which is not of high quality"
],
[
"Extract features from the LDA model and use them in a binary classification task"
]
] |
Automatic summarization has enjoyed wide popularity in natural language processing due to its potential for various information access applications. Examples include tools which aid users navigate and digest web content (e.g., news, social media, product reviews), question answering, and personalized recommendation eng... | [
"Do they use other evaluation metrics besides ROUGE?",
"What is their ROUGE score?",
"What are the baselines?"
] | [
[
"",
""
],
[
""
],
[
"",
"Answer with content missing: (Experimental Setup missing subsections)\nTo be selected: We compared REFRESH against a baseline which simply selects the first m leading sentences from each document (LEAD) and two neural models similar to ours (see left block in F... |
Adversarial examples, a term introduced in BIBREF0, are inputs transformed by small perturbations that machine learning models consistently misclassify. The experiments are conducted in the context of computer vision (CV), and the core idea is encapsulated by an illustrative example: after imperceptible noises are adde... | [
"What datasets do they use?",
"What other factors affect the performance?",
"What are the benchmark attacking methods?"
] | [
[
"",
"1 IMDB dataset and 2 Yelp datasets"
],
[
""
],
[
"",
""
]
] |
End-to-end speech-to-text translation (ST) has attracted much attention recently BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 given its simplicity against cascading automatic speech recognition (ASR) and machine translation (MT) systems. The lack of labeled data, however, has become a major blocker for bridging the perf... | [
"What domains are covered in the corpus?",
"What is the architecture of their model?",
"How was the dataset collected?",
"Which languages are part of the corpus?",
"How is the quality of the data empirically evaluated? ",
"Is the data in CoVoST annotated for dialect?",
"Is Arabic one of the 11 languages... | [
[
"No specific domain is covered in the corpus."
],
[
""
],
[
"",
""
],
[
"",
""
],
[
"",
""
],
[
""
],
[
"",
""
],
[
"",
""
]
] |
Relation classification is the task of assigning sentences with two marked entities to a predefined set of relations. The sentence “We poured the <e1>milk</e1> into the <e2>pumpkin mixture</e2>.”, for example, expresses the relation Entity-Destination(e1,e2). While early research mostly focused on support vector machin... | [
"By how much does their best model outperform the state-of-the-art?",
"Which dataset do they train their models on?",
"How does their simple voting scheme work?",
"Which variant of the recurrent neural network do they use?",
"How do they obtain the new context represetation?"
] | [
[
"0.8% F1 better than the best state-of-the-art",
"Best proposed model achieves F1 score of 84.9 compared to best previous result of 84.1."
],
[
"",
""
],
[
"",
"Among all the classes predicted by several models, for each test sentence, the class with the most votes is picked. In case of a... |
In recent years many datasets have been created for the task of automated stance detection, advancing natural language understanding systems for political science, opinion research and other application areas. Typically, such benchmarks BIBREF0 are composed of short pieces of text commenting on politicians or public is... | [
"Does the paper report the performance of the model for each individual language?",
"What is the performance of the baseline?",
"Did they pefrorm any cross-lingual vs single language evaluation?",
"What was the performance of multilingual BERT?",
"What annotations are present in dataset?"
] | [
[
"",
"",
""
],
[
"M-BERT had a 76.6 macro F1 score.",
"75.1% and 75.6% accuracy"
],
[
""
],
[
"BERT had a 76.6 macro F1 score on the x-stance dataset."
],
[
""
]
] |
To structure an unordered document is an essential task in many applications. It is a post-requisite for applications like multiple document extractive text summarization where we have to present a summary of multiple documents. It is a prerequisite for applications like question answering from multiple documents where... | [
"What is an unordered text document, do these arise in real-world corpora?",
"What kind of model do they use?",
"Do they release a data set?",
"Do they release code?",
"Which languages do they evaluate on?"
] | [
[
"An unordered text document is one where sentences in the document are disordered or jumbled. It doesn't appear that unordered text documents arise in corpora; rather, they are introduced as part of a processing pipeline."
],
[
""
],
[
"",
""
],
[
"",
""
],
[
"",
""
]... |
Privacy policies are the documents which disclose the ways in which a company gathers, uses, shares and manages a user's data. As legal documents, they function using the principle of notice and choice BIBREF0, where companies post their policies, and theoretically, users read the policies and decide to use a company's... | [
"Are the experts comparable to real-world users?",
"Are the answers double (and not triple) annotated?",
"Who were the experts used for annotation?",
"What type of neural model was used?",
"Were other baselines tested to compare with the neural baseline?"
] | [
[
""
],
[
""
],
[
"Individuals with legal training",
""
],
[
"",
""
],
[
"",
""
]
] |
We understand from Zipf's Law that in any natural language corpus a majority of the vocabulary word types will either be absent or occur in low frequency. Estimating the statistical properties of these rare word types is naturally a difficult task. This is analogous to the curse of dimensionality when we deal with sequ... | [
"Does the paper clearly establish that the challenges listed here exist in this dataset and task?",
"Is this hashtag prediction task an established task, or something new?",
"What is the word-level baseline?",
"What other tasks do they test their method on?",
"what is the word level baseline they compare to... | [
[
""
],
[
"established task",
""
],
[
"",
""
],
[
"None"
],
[
"",
""
]
] |
Word Sense Disambiguation (WSD) is a fundamental task and long-standing challenge in Natural Language Processing (NLP), which aims to find the exact sense of an ambiguous word in a particular context BIBREF0. Previous WSD approaches can be grouped into two main categories: knowledge-based and supervised methods.Knowled... | [
"What is the state of the art system mentioned?",
"Do they incoprorate WordNet into the model?",
"Is SemCor3.0 reflective of English language data in general?",
"Do they use large or small BERT?",
"How does the neural network architecture accomodate an unknown amount of senses per word?"
] | [
[
"Two knowledge-based systems,\ntwo traditional word expert supervised systems, six recent neural-based systems, and one BERT feature-based system."
],
[
"",
""
],
[
"",
""
],
[
"small BERT",
"small BERT"
],
[
""
]
] |
The task of document quality assessment is to automatically assess a document according to some predefined inventory of quality labels. This can take many forms, including essay scoring (quality = language quality, coherence, and relevance to a topic), job application filtering (quality = suitability for role + visual/... | [
"Which fonts are the best indicators of high quality?",
"What kind of model do they use?",
"Did they release their data set of academic papers?",
"Do the methods that work best on academic papers also work best on Wikipedia?",
"What is their system's absolute accuracy?",
"Which is more useful, visual or t... | [
[
""
],
[
"",
""
],
[
"",
""
],
[
"",
""
],
[
"59.4% on the Wikipedia dataset, 93.4% on peer-reviewed archive AI papers, 77.1% on peer-reviewed archive Computation and Language papers, and 79.9% on peer-reviewed archive Machine Learning papers"
],
[
"It depends o... |
In the field of natural language processing (NLP), the most prevalent neural approach to obtaining sentence representations is to use recurrent neural networks (RNNs), where words in a sentence are processed in a sequential and recurrent manner. Along with their intuitive design, RNNs have shown outstanding performance... | [
"Which models did they experiment with?",
"What were their best results on the benchmark datasets?",
"What were the baselines?",
"Which datasets were used?"
] | [
[
""
],
[
"",
""
],
[
""
],
[
"",
""
]
] |
Modern Standard Arabic (MSA) and Classical Arabic (CA) have two types of vowels, namely long vowels, which are explicitly written, and short vowels, aka diacritics, which are typically omitted in writing but are reintroduced by readers to properly pronounce words. Since diacritics disambiguate the sense of the words in... | [
"what datasets were used?",
"what are the previous state of the art?",
"what surface-level features are used?",
"what linguistics features are used?"
] | [
[
"",
""
],
[
"",
""
],
[
""
],
[
"POS, gender/number and stem POS"
]
] |
Ambiguity and implicitness are inherent properties of natural language that cause challenges for computational models of language understanding. In everyday communication, people assume a shared common ground which forms a basis for efficiently resolving ambiguities and for inferring implicit information. Thus, recover... | [
"what dataset statistics are provided?",
"what is the size of their dataset?",
"what crowdsourcing platform was used?",
"how was the data collected?"
] | [
[
"More than 2,100 texts were paired with 15 questions each, resulting in a total number of approx. 32,000 annotated questions. 13% of the questions are not answerable. Out of the answerable questions, 10,160 could be answered from the text directly (text-based) and 3,914 questions required the use of commonse... |
In the age of information dissemination without quality control, it has enabled malicious users to spread misinformation via social media and aim individual users with propaganda campaigns to achieve political and financial gains as well as advance a specific agenda. Often disinformation is complied in the two major fo... | [
"What is best performing model among author's submissions, what performance it had?",
"What extracted features were most influencial on performance?",
"Did ensemble schemes help in boosting peformance, by how much?",
"Which basic neural architecture perform best by itself?",
"What participating systems had ... | [
[
"For SLC task, the \"ltuorp\" team has the best performing model (0.6323/0.6028/0.6649 for F1/P/R respectively) and for FLC task the \"newspeak\" team has the best performing model (0.2488/0.2863/0.2201 for F1/P/R respectively)."
],
[
"Linguistic",
""
],
[
"The best ensemble topped th... |
Massive Open Online Courses (MOOCs) have strived to bridge the social gap in higher education by bringing quality education from reputed universities to students at large. Such massive scaling through online classrooms, however, disrupt co-located, synchronous two-way communication between the students and the instruct... | [
"Do they report results only on English data?",
"What aspects of discussion are relevant to instructor intervention, according to the attention mechanism?",
"What was the previous state of the art for this task?",
"What type of latent context is used to predict instructor intervention?"
] | [
[
"",
""
],
[
""
],
[
"",
""
],
[
""
]
] |
We build and test our MMT models on the Multi30K dataset BIBREF21 . Each image in Multi30K contains one English (EN) description taken from Flickr30K BIBREF22 and human translations into German (DE), French (FR) and Czech BIBREF23 , BIBREF24 , BIBREF25 . The dataset contains 29,000 instances for training, 1,014 for dev... | [
"Do they report results only on English dataset?",
"What dataset does this approach achieve state of the art results on?"
] | [
[
"",
""
],
[
""
]
] |
Pre-trained models BIBREF0, BIBREF1 have received much of attention recently thanks to their impressive results in many down stream NLP tasks. Additionally, multilingual pre-trained models enable many NLP applications for other languages via zero-short cross-lingual transfer. Zero-shot cross-lingual transfer has shown ... | [
"How much training data from the non-English language is used by the system?",
"Is the system tested on low-resource languages?",
"What languages are the model transferred to?",
"How is the model transferred to other languages?",
"What metrics are used for evaluation?",
"What datasets are used for evaluat... | [
[
"No data; a pretrained model is used."
],
[
"",
""
],
[
"",
""
],
[
"Build a bilingual language model: learn the target-language-specific parameters starting from a pretrained English LM, then fine-tune both the English and target models to obtain the bilingual LM."
],
[
"",
... |
Users of photo-sharing websites such as Flickr often provide short textual descriptions in the form of tags to help others find the images. With the availability of GPS systems in current electronic devices such as smartphones, latitude and longitude coordinates are nowadays commonly made available as well. The tags as... | [
"what are the existing approaches?",
"what dataset is used in this paper?"
] | [
[
""
],
[
"",
""
]
] |
Keyphrase generation is the task of automatically predicting keyphrases given a source text. Desired keyphrases are often multi-word units that summarize the high-level meaning and highlight certain important topics or information of the source text. Consequently, models that can successfully perform this task should b... | [
"How is keyphrase diversity measured?",
"How was the StackExchange dataset collected?",
"What does the TextWorld ACG dataset contain?",
"What is the size of the StackExchange dataset?",
"What were the baselines?",
"What two metrics are proposed?"
] | [
[
""
],
[
"they obtained computer science related topics by looking at titles and user-assigned tags",
""
],
[
""
],
[
"",
"around 332k questions"
],
[
"CopyRNN (Meng et al., 2017), Multi-Task (Ye and Wang, 2018), and TG-Net (Chen et al., 2018b)",
""
],
[
""
... |
It is well known that language has certain structural properties which allows natural language speakers to make “infinite use of finite means" BIBREF3 . This structure allows us to generalize beyond the typical machine learning definition of generalization BIBREF4 (which considers performance on the distribution that g... | [
"Can the findings of this paper be generalized to a general-purpose task?",
"Why does the proposed task a good proxy for the general-purpose sequence to sequence tasks?"
] | [
[
"",
""
],
[
""
]
] |
The apparent rise in political incivility has attracted substantial attention from scholars in recent years. These studies have largely focused on the extent to which politicians and elected officials are increasingly employing rhetoric that appears to violate norms of civility BIBREF0 , BIBREF1 . For the purposes of o... | [
"What was the baseline?",
"What was their system's performance?",
"What other political events are included in the database?",
"What classifier did they use?"
] | [
[
"",
""
],
[
"",
""
],
[
""
],
[
""
]
] |
“Ché saetta previsa vien più lenta.”– Dante Alighieri, Divina Commedia, ParadisoAntisocial behavior is a persistent problem plaguing online conversation platforms; it is both widespread BIBREF0 and potentially damaging to mental and emotional health BIBREF1, BIBREF2. The strain this phenomenon puts on community maintai... | [
"What labels for antisocial events are available in datasets?",
"What are two datasets model is applied to?"
] | [
[
"The Conversations Gone Awry dataset is labelled as either containing a personal attack from within (i.e. hostile behavior by one user in the conversation directed towards another) or remaining civil throughout. The Reddit Change My View dataset is labelled with whether or not a conversation eventually had a ... |
Coronavirus disease 2019 (COVID-19) is an infectious disease that has affected more than one million individuals all over the world and caused more than 55,000 deaths, as of April 3 in 2020. The science community has been working very actively to understand this new disease and make diagnosis and treatment guidelines b... | [
"What is the CORD-19 dataset?",
"How large is the collection of COVID-19 literature?"
] | [
[
"",
""
],
[
""
]
] |
Automatic summarization, machine translation, question answering, and semantic parsing operations are useful for processing, analyzing, and extracting meaningful information from text. However, when applied to long texts, these tasks usually require some minimal syntactic structure to be identified, such as sentences B... | [
"Which deep learning architecture do they use for sentence segmentation?",
"How do they utilize unlabeled data to improve model representations?"
] | [
[
"",
""
],
[
""
]
] |
A growing body of work on adversarial examples has identified that for machine-learning (ML) systems that operate on high-dimensional data, for nearly every natural input there exists a small perturbation of the point that will be misclassified by the system, posing a threat to its deployment in certain critical settin... | [
"What is the McGurk effect?",
"Are humans and machine learning systems fooled by the same kinds of illusions?"
] | [
[
"a perceptual illusion, where listening to a speech sound while watching a mouth pronounce a different sound changes how the audio is heard",
"When the perception of what we hear is influenced by what we see."
],
[
""
]
] |
Machine translation has made remarkable progress, and studies claiming it to reach a human parity are starting to appear BIBREF0. However, when evaluating translations of the whole documents rather than isolated sentences, human raters show a stronger preference for human over machine translation BIBREF1. These finding... | [
"how many humans evaluated the results?",
"what was the baseline?",
"what phenomena do they mention is hard to capture?",
"by how much did the BLEU score improve?"
] | [
[
"",
""
],
[
"",
""
],
[
"Four discourse phenomena - deixis, lexical cohesion, VP ellipsis, and ellipsis which affects NP inflection."
],
[
"On average 0.64 "
]
] |
The explosion of available scientific articles in the Biomedical domain has led to the rise of Biomedical Information Extraction (BioIE). BioIE systems aim to extract information from a wide spectrum of articles including medical literature, biological literature, electronic health records, etc. that can be used by cli... | [
"What is NER?",
"Does the paper explore extraction from electronic health records?"
] | [
[
"",
"Named Entity Recognition, including entities such as proteins, genes, diseases, treatments, drugs, etc. in the biomedical domain"
],
[
""
]
] |
This paper introduces jiant, an open source toolkit that allows researchers to quickly experiment on a wide array of NLU tasks, using state-of-the-art NLP models, and conduct experiments on probing, transfer learning, and multitask training. jiant supports many state-of-the-art Transformer-based models implemented by H... | [
"Does jiant involve datasets for the 50 NLU tasks?",
"Is jiant compatible with models in any programming language?"
] | [
[
""
],
[
"",
""
]
] |
Neural networks have been successfully used to describe images with text using sequence-to-sequence models BIBREF0. However, the results are simple and dry captions which are one or two phrases long. Humans looking at a painting see more than just objects. Paintings stimulate sentiments, metaphors and stories as well. ... | [
"What models are used for painting embedding and what for language style transfer?",
"What applicability of their approach is demonstrated by the authors?",
"What limitations do the authors demnostrate of their model?",
"How does final model rate on Likert scale?",
"How big is English poem description of th... | [
[
""
],
[
""
],
[
"",
""
],
[
"",
""
],
[
""
],
[
"",
""
]
] |
Text-based games became popular in the mid 80s with the game series Zork BIBREF1 resulting in many different text-based games being produced and published BIBREF2. These games use a plain text description of the environment and the player has to interact with them by writing natural-language commands. Recently, there h... | [
"How better does new approach behave than existing solutions?",
"How is trajectory with how rewards extracted?",
"On what Text-Based Games are experiments performed?",
"How do the authors show that their learned policy generalize better than existing solutions to unseen games?"
] | [
[
"",
"On Coin Collector, the proposed model finds a shorter path in fewer interactions with the environment.\nOn Cooking World, the proposed model uses the smallest number of steps and on average has a higher score and more wins by a significant margin."
],
[
""
],
[
"",
""
],
[
""
]
... |
The performance of machines often crucially depend on the amount and quality of the data used for training. It has become increasingly ubiquitous to manipulate data to improve learning, especially in low data regime or in presence of low-quality datasets (e.g., imbalanced labels). For example, data augmentation applies... | [
"How much is classification performance improved in experiments for low data regime and class-imbalance problems?",
"What off-the-shelf reward learning algorithm from RL for joint data manipulation learning and model training is adapted?"
] | [
[
"Low data: SST-5, TREC, IMDB around 1-2 accuracy points better than baseline\nImbalanced labels: the improvement over the base model increases as the data gets more imbalanced, ranging from around 6 accuracy points on 100:1000 to over 20 accuracy points on 20:1000"
],
[
"",
""
]
] |
Understanding the emotions expressed in a text or message is of high relevance nowadays. Companies are interested in this to get an understanding of the sentiment of their current customers regarding their products and the sentiment of their potential customers to attract new ones. Moreover, changes in a product or a c... | [
"What subtasks did they participate in?",
"What were the scores of their system?",
"How was the training data translated?",
"What dataset did they use?",
"What other languages did they translate the data from?",
"What semi-supervised learning is applied?"
] | [
[
"Answer with content missing: (Subscript 1: \"We did not participate in subtask 5 (E-c)\") Authors participated in EI-Reg, EI-Oc, V-Reg and V-Oc subtasks."
],
[
""
],
[
"",
""
],
[
"A selection of tweets, each with a label describing the intensity of the emotion or sentimen... |
The lack of annotated training and evaluation data for many tasks and domains hinders the development of computational models for the majority of the world's languages BIBREF0, BIBREF1, BIBREF2. The necessity to guide and advance multilingual and cross-lingual NLP through annotation efforts that follow cross-lingually ... | [
"How were the datasets annotated?",
"What are the 12 languages covered?"
] | [
[
""
],
[
"Chinese Mandarin, Welsh, English, Estonian, Finnish, French, Hebrew, Polish, Russian, Spanish, Kiswahili, Yue Chinese",
"Chinese Mandarin, Welsh, English, Estonian, Finnish, French, Hebrew, Polish, Russian, Spanish, Kiswahili, Yue Chinese"
]
] |
Multi-document summarization (MDS), the transformation of a set of documents into a short text containing their most important aspects, is a long-studied problem in NLP. Generated summaries have been shown to support humans dealing with large document collections in information seeking tasks BIBREF0 , BIBREF1 , BIBREF2... | [
"Does the corpus contain only English documents?",
"What type of evaluation is proposed for this task?",
"What baseline system is proposed?",
"How were crowd workers instructed to identify important elements in large document collections?",
"Which collections of web documents are included in the corpus?",
... | [
[
"",
"",
""
],
[
"Answer with content missing: (Evaluation Metrics section) Precision, Recall, F1-scores, Strict match, METEOR, ROUGE-2"
],
[
"Answer with content missing: (Baseline Method section) We implemented a simple approach inspired by previous work on concept map generation and ... |
Language modeling is a probabilistic description of language phenomenon. It provides essential context to distinguish words which sound similar and therefore has one of the most useful applications in Natural Language Processing (NLP) especially in downstreaming tasks like Automatic Speech Recognition (ASR). Recurrent ... | [
"Is the LSTM baseline a sub-word model?",
"How is pseudo-perplexity defined?"
] | [
[
"",
""
],
[
"Answer with content missing: (formulas in selection): Pseudo-perplexity is perplexity where conditional joint probability is approximated."
]
] |
What would be possible if a person had an oracle that could immediately provide the answer to any question about the visual world? Sight-impaired users could quickly and reliably figure out the denomination of their currency and so whether they spent the appropriate amount for a product BIBREF0 . Hikers could immediate... | [
"What is the model architecture used?",
"How is the data used for training annotated?"
] | [
[
"LSTM to encode the question, VGG16 to extract visual features. The outputs of LSTM and VGG16 are multiplied element-wise and sent to a softmax layer.",
""
],
[
"The number of redundant answers to collect from the crowd is predicted to efficiently capture the diversity of all answers from all visu... |
CLIR systems retrieve documents written in a language that is different from search query language BIBREF0 . The primary objective of CLIR is to translate or project a query into the language of the document repository BIBREF1 , which we refer to as Retrieval Corpus (RC). To this end, common CLIR approaches translate s... | [
"what quantitative analysis is done?",
"what are the baselines?"
] | [
[
"Answer with content missing: (Evaluation section) Given that in CLIR the primary goal is to get a better ranked list of documents against a translated query, we only report Mean Average Precision (MAP)."
],
[
"",
""
]
] |
With the availability of rich data on users' locations, profiles and search history, personalization has become the leading trend in large-scale information retrieval. However, efficiency through personalization is not yet the most suitable model when tackling domain-specific searches. This is due to several factors, s... | [
"Do they report results only on English data?",
"What machine learning and deep learning methods are used for RQE?"
] | [
[
"",
""
],
[
""
]
] |
Spoken Dialogue Systems (SDS) allow human-computer interaction using natural speech. Task-oriented dialogue systems, the focus of this work, help users achieve goals such as finding restaurants or booking flights BIBREF0 .Teaching a system how to respond appropriately in a task-oriented setting is non-trivial. In state... | [
"by how much did nus outperform abus?",
"what corpus is used to learn behavior?"
] | [
[
"Average success rate is higher by 2.6 percentage points."
],
[
"",
""
]
] |
Text classification has become an indispensable task due to the rapid growth in the number of texts in digital form available online. It aims to classify different texts, also called documents, into a fixed number of predefined categories, helping to organize data, and making easier for users to find the desired inform... | [
"Which dataset has been used in this work?",
"What can word subspace represent?"
] | [
[
"",
"The Reuters-8 dataset (with stop words removed)"
],
[
"Word vectors, usually in the context of others within the same class"
]
] |
Medical text mining is an exciting area and is becoming attractive to natural language processing (NLP) researchers. Clinical notes are an example of text in the medical area that recent work has focused on BIBREF0, BIBREF1, BIBREF2. This work studies abbreviation disambiguation on clinical notes BIBREF3, BIBREF4, spec... | [
"How big are improvements of small-scale unbalanced datasets when sentence representation is enhanced with topic information?",
"To what baseline models is proposed model compared?",
"How big is dataset for testing?",
"What existing dataset is re-examined and corrected for training?"
] | [
[
"",
"It has a 0.024 improvement in accuracy and a 0.006 improvement in F1 score compared to ELMo Only."
],
[
"",
""
],
[
"30 terms; each term-sense pair has around 15 samples for testing"
],
[
""
]
] |
Language modelling in its inception had one-hot vector encoding of words. However, it captures only alphabetic ordering but not the word semantic similarity. Vector space models helps to learn word representations in a lower dimensional space and also captures semantic similarity. Learning word embedding aids in natura... | [
"What are the qualitative experiments performed on benchmark datasets?",
"How does this approach compare to other WSD approaches employing word embeddings?"
] | [
[
"Spearman correlation values of GM_KL model evaluated on the benchmark word similarity datasets.\nEvaluation results of GM_KL model on the entailment datasets such as entailment pairs dataset created from WordNet, crowdsourced dataset of 79 semantic relations labelled as entailed or not and annotated distribu... |
In recent years, gender has become a hot topic within the political, societal and research spheres. Numerous studies have been conducted in order to evaluate the presence of women in media, often revealing their under-representation, such as the Global Media Monitoring Project BIBREF0. In the French context, the CSA BI... | [
"What tasks did they use to evaluate performance for male and female speakers?",
"What is the goal of investigating NLP gender bias specifically in the news broadcast domain and Anchor role?",
"Which corpora does this paper analyse?",
"How many categories do authors define for speaker role?",
"How big is im... | [
[
"",
""
],
[
"",
""
],
[
"",
""
],
[
""
],
[
""
],
[
""
]
] |
- ¡Socorro, me ha picado una víbora!- ¿Cobra?- No, gratis.[5]Google Translation:- Help, I was bitten by a snake!- Does it charge?- Not free.[4]https://github.com/bfarzin/haha_2019_final, Accessed on 19 June 2019 [5]https://www.fluentin3months.com/spanish-jokes/, Accessed on 19 June 2019Humor does not translate well b... | [
"What did the best systems use for their model?",
"What were their results on the classification and regression tasks"
] | [
[
""
],
[
"",
"F1 score result of 0.8099"
]
] |
A Winograd schema (Levesque, Davis, and Morgenstern 2012) is a pair of sentences, or of short texts, called the elements of the schema, that satisfy the following constraints:The following is an example of a Winograd schema:Here, the two sentences differ only in the last word: `large' vs. `small'. The ambiguous pronoun... | [
"Do the authors conduct experiments on the tasks mentioned?",
"Did they collect their own datasets?",
"What data do they look at?",
"What language do they explore?"
] | [
[
"",
""
],
[
""
],
[
""
],
[
"",
""
]
] |
Many research attempts have proposed novel features that improve the performance of learning algorithms in particular tasks. Such features are often motivated by domain knowledge or manual labor. Although useful and often state-of-the-art, adapting such solutions on NLP systems across tasks can be tricky and time-consu... | [
"Do they report results only on English datasets?",
"Which hyperparameters were varied in the experiments on the four tasks?",
"Which other hyperparameters, other than number of clusters are typically evaluated in this type of research?",
"How were the cluster extracted? "
] | [
[
"",
""
],
[
"number of clusters, seed value in clustering, selection of word vectors, window size and dimension of embedding",
""
],
[
""
],
[
"Word clusters are extracted using k-means on word embeddings"
]
] |
Grammar induction is the task of inducing hierarchical syntactic structure from data. Statistical approaches to grammar induction require specifying a probabilistic grammar (e.g. formalism, number and shape of rules), and fitting its parameters through optimization. Early work found that it was difficult to induce pro... | [
"what were the evaluation metrics?",
"what are the state of the art methods?",
"what english datasets were used?",
"which chinese datasets were used?"
] | [
[
"",
"Unlabeled sentence-level F1, perplexity, grammatically judgment performance"
],
[
"",
""
],
[
"Answer with content missing: (Data section) Penn Treebank (PTB)"
],
[
"Answer with content missing: (Data section) Chinese with version 5.1 of the Chinese Penn Treebank (CTB)"
... |
Characterizing Political Fake News in Twitter by its Meta-DataJulio Amador Díaz LópezAxel Oehmichen Miguel Molina-Solana( j.amador, axelfrancois.oehmichen11, mmolinas@imperial.ac.uk ) Imperial College London This article presents a preliminary approach towards characterizing political fake news on Twitter t... | [
"What were their distribution results?",
"How did they determine fake news tweets?",
"What is their definition of tweets going viral?",
"What are the characteristics of the accounts that spread fake news?",
"What is the threshold for determining that a tweet has gone viral?",
"How is the ground truth for ... | [
[
"Distributions of Followers, Friends and URLs are significantly different between the set of tweets containing fake news and those non containing them, but for Favourites, Mentions, Media, Retweets and Hashtags they are not significantly different"
],
[
"an expert annotator determined if the tweet fel... |
Visual dialog BIBREF0 is an interesting new task combining the research efforts from Computer Vision, Natural Language Processing and Information Retrieval. While BIBREF1 presents some tips and tricks for VQA 2.0 Challenge, we follow their guidelines for the Visual Dialog challenge 2018. Our models use attention simila... | [
"What was the baseline?",
"Which three discriminative models did they use?"
] | [
[
""
],
[
"",
""
]
] |
Ancient Chinese is the writing language in ancient China. It is a treasure of Chinese culture which brings together the wisdom and ideas of the Chinese nation and chronicles the ancient cultural heritage of China. Learning ancient Chinese not only helps people to understand and inherit the wisdom of the ancients, but a... | [
"what NMT models did they compare with?",
"Where does the ancient Chinese dataset come from?"
] | [
[
""
],
[
"",
"Ancient Chinese history records in several dynasties and articles written by celebrities during 1000BC-200BC collected from the internet "
]
] |
Attempts toward constructing human-like dialogue agents have met significant difficulties, such as maintaining conversation consistency BIBREF0. This is largely due to inabilities of dialogue agents to engage the user emotionally because of an inconsistent personality BIBREF1. Many agents use personality models that at... | [
"How many different characters were in dataset?",
"How does dataset model character's profiles?",
"How big is the difference in performance between proposed model and baselines?",
"What baseline models are used?"
] | [
[
"",
""
],
[
""
],
[
"Metric difference between Aloha and best baseline score:\nHits@1/20: +0.061 (0.3642 vs 0.3032)\nMRR: +0.0572(0.5114 vs 0.4542)\nF1: -0.0484 (0.3901 vs 0.4385)\nBLEU: +0.0474 (0.2867 vs 0.2393)"
],
[
"",
""
]
] |
Task-oriented dialogue systems are primarily designed to search and interact with large databases which contain information pertaining to a certain dialogue domain: the main purpose of such systems is to assist the users in accomplishing a well-defined task such as flight booking BIBREF0, tourist information BIBREF1, r... | [
"Was PolyReponse evaluated against some baseline?",
"What metric is used to evaluate PolyReponse system?",
"How does PolyResponse architecture look like?",
"In what 8 languages is PolyResponse engine used for restourant search and booking system?"
] | [
[
"",
""
],
[
""
],
[
""
],
[
"English, German, Spanish, Mandarin, Polish, Russian, Korean and Serbian",
""
]
] |
Text summarization generates summaries from input documents while keeping salient information. It is an important task and can be applied to several real-world applications. Many methods have been proposed to solve the text summarization problem BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . There are two main text summarizat... | [
"Why masking words in the decoder is helpful?",
"What is the ROUGE score of the highest performing model?",
"How are the different components of the model trained? Is it trained end-to-end?",
"When is this paper published?"
] | [
[
""
],
[
"",
""
],
[
"",
""
],
[
""
]
] |
Question answering (QA) has been a blooming research field for the last decade. Selection-based QA implies a family of tasks that find answer contexts from large data given questions in natural language. Three tasks have been proposed for selection-based QA. Given a document, answer extraction BIBREF0 , BIBREF1 finds a... | [
"Can their indexing-based method be applied to create other QA datasets in other domains, and not just Wikipedia?",
"Do they employ their indexing-based method to create a sample of a QA Wikipedia dataset?",
"How many question types do they find in the datasets analyzed?",
"How do they analyze contextual simi... | [
[
""
],
[
"",
""
],
[
"",
"7"
],
[
"They compare the tasks that the datasets are suitable for, average number of answer candidates per question, number of token types, average answer candidate lengths, average question lengths, question-answer word overlap."
]
] |
Cyberbullying has been defined by the National Crime Prevention Council as the use of the Internet, cell phones or other devices to send or post text or images intended to hurt or embarrass another person. Various studies have estimated that between to 10% to 40% of internet users are victims of cyberbullying BIBREF0 .... | [
"What were their performance results?",
"What cyberbulling topics did they address?"
] | [
[
"best model achieves 0.94 F1 score for Wikipedia and Twitter datasets and 0.95 F1 on Formspring dataset"
],
[
"",
""
]
] |
The automatic identification, extraction and representation of the information conveyed in texts is a key task nowadays. In fact, this research topic is increasing its relevance with the exponential growth of social networks and the need to have tools that are able to automatically process them BIBREF0.Some of the doma... | [
"Were any of the pipeline components based on deep learning models?",
"How is the effectiveness of this pipeline approach evaluated?"
] | [
[
"",
""
],
[
""
]
] |
Prepositional Phrase (PP) attachment disambiguation is an important problem in NLP, for it often gives rise to incorrect parse trees . Statistical parsers often predict incorrect attachment for prepositional phrases. For applications like Machine Translation, incorrect PP-attachment leads to serious errors in translati... | [
"What is the size of the parallel corpus used to train the model constraints?",
"How does enforcing agreement between parse trees work across different languages?"
] | [
[
"",
""
],
[
""
]
] |
Topic identification (topic ID) on speech aims to identify the topic(s) for given speech recordings, referred to as spoken documents, where the topics are a predefined set of classes or labels. This task is typically formulated as a three-step process. First, speech is tokenized into words or phones by automatic speech... | [
"What datasets are used to assess the performance of the system?",
"How is the vocabulary of word-like or phoneme-like units automatically discovered?"
] | [
[
"",
"LORELEI datasets of Uzbek, Mandarin and Turkish"
],
[
""
]
] |
The availability of massive electronic health records (EHR) data and the advances of deep learning technologies have provided unprecedented resource and opportunity for predictive healthcare, including the computational medication recommendation task. A number of deep learning models were proposed to assist doctors in ... | [
"IS the graph representation supervised?",
"Is the G-BERT model useful beyond the task considered?"
] | [
[
"The graph representation appears to be semi-supervised. It is included in the learning pipeline for the medical recommendation, where the attention model is learned. (There is some additional evidence that is unavailable in parsed text)"
],
[
"There is nothing specific about the approach that depends... |
The task of interpreting and following natural language (NL) navigation instructions involves interleaving different signals, at the very least the linguistic utterance and the representation of the world. For example, in turn right on the first intersection, the instruction needs to be interpreted, and a specific obje... | [
"How well did the baseline perform?",
"What is the baseline?"
] | [
[
""
],
[
"",
""
]
] |
Neural machine translation (NMT) BIBREF0 , BIBREF1 is widely applied for machine translation (MT) in recent years and focuses on popular language pairs such as English INLINEFORM0 French, English INLINEFORM1 German, English INLINEFORM2 Chinese or English INLINEFORM3 Japanese. NMT has obtained state-of-the-art performan... | [
"what methods were used to reduce data sparsity effects?",
"what was the baseline?",
"did they collect their own data?",
"what japanese-vietnamese dataset do they use?"
] | [
[
"",
""
],
[
"",
""
],
[
""
],
[
""
]
] |
Sequence-to-sequence (seq2seq) transformations have recently proven to be a successful framework for several natural language processing tasks, like: machine translation (MT) BIBREF0 , BIBREF1 , speech recognition BIBREF2 , speech synthesis BIBREF3 , natural language inference BIBREF4 and others. However, the success o... | [
"How do they measure style transfer success?",
"Do they introduce errors in the data or does the data already contain them?",
"What error types is their model more reliable for?",
"How does their parallel data differ in terms of style?"
] | [
[
""
],
[
"",
"Data already contain errors"
],
[
"",
""
],
[
""
]
] |
Many machine learning models in question answering tasks often involve matching mechanism. For example, in factoid question answering such as SQuAD BIBREF1 , one needs to match between query and corpus in order to find out the most possible fragment as answer. In multiple choice question answering, such as MC Test BIBR... | [
"How do they split text to obtain sentence levels?",
"Do they experiment with their proposed model on any other dataset other than MovieQA?"
] | [
[
""
],
[
"",
""
]
] |
Natural Language Generation (NLG) plays a critical role in Spoken Dialogue Systems (SDS) with task is to convert a meaning representation produced by the Dialogue Manager into natural language utterances. Conventional approaches still rely on comprehensive hand-tuning templates and rules requiring expert knowledge of l... | [
"What is the difference of the proposed model with a standard RNN encoder-decoder?",
"Does the model evaluated on NLG datasets or dialog datasets?"
] | [
[
"Introduce a \"Refinement Adjustment LSTM-based component\" to the decoder"
],
[
"NLG datasets",
"NLG datasets"
]
] |
Learning the distributed representation for long spans of text from its constituents has been a key step for various natural language processing (NLP) tasks, such as text classification BIBREF0 , BIBREF1 , semantic matching BIBREF2 , BIBREF3 , and machine translation BIBREF4 . Existing deep learning approaches take a c... | [
"What tasks do they experiment with?",
"What is the meta knowledge specifically?"
] | [
[
"",
""
],
[
""
]
] |
Singing is an important way of human expression and the techniques of singing synthesis have broad applications in different prospects including virtual human, movie dubbing and so on. Traditional singing synthesize systems are based on concatenative BIBREF1 or HMM BIBREF2 based approaches. With the success of deep lea... | [
"Are there elements, other than pitch, that can potentially result in out of key converted singing?",
"How is the quality of singing voice measured?"
] | [
[
""
],
[
"",
"Automatic: Normalized cross correlation (NCC)\nManual: Mean Opinion Score (MOS)"
]
] |
Long short term memory (LSTM) units BIBREF1 are popular for many sequence modeling tasks and are used extensively in language modeling. A key to their success is their articulated gating structure, which allows for more control over the information passed along the recurrence. However, despite the sophistication of the... | [
"what data did they use?",
"what previous RNN models do they compare with?"
] | [
[
"",
""
],
[
"Variational LSTM, CharCNN, Pointer Sentinel-LSTM, RHN, NAS Cell, SRU, QRNN, RAN, 4-layer skip-connection LSTM, AWD-LSTM, Quantized LSTM"
]
] |
While most NLP resources are English-specific, there have been several recent efforts to build multilingual benchmarks. One possibility is to collect and annotate data in multiple languages separately BIBREF0, but most existing datasets have been created through translation BIBREF1, BIBREF2. This approach has two desir... | [
"What are examples of these artificats?",
"What are the languages they use in their experiment?",
"Does the professional translation or the machine translation introduce the artifacts?",
"Do they recommend translating the premise and hypothesis together?",
"Is the improvement over state-of-the-art statistic... | [
[
"",
""
],
[
"English\nFrench\nSpanish\nGerman\nGreek\nBulgarian\nRussian\nTurkish\nArabic\nVietnamese\nThai\nChinese\nHindi\nSwahili\nUrdu\nFinnish",
""
],
[
""
],
[
"",
""
],
[
""
],
[
""
],
[
""
]
] |
Assembling training corpora of annotated natural language examples in specialized domains such as biomedicine poses considerable challenges. Experts with the requisite domain knowledge to perform high-quality annotation tend to be expensive, while lay annotators may not have the necessary knowledge to provide high-qual... | [
"How much higher quality is the resulting annotated data?",
"How do they match annotators to instances?",
"How much data is needed to train the task-specific encoder?",
"What kind of out-of-domain data?",
"Is an instance a sentence or an IE tuple?"
] | [
[
""
],
[
"Annotations from experts are used if they have already been collected."
],
[
"57,505 sentences",
"57,505 sentences"
],
[
"",
""
],
[
""
]
] |
As social media, specially Twitter, takes on an influential role in presidential elections in the U.S., natural language processing of political tweets BIBREF0 has the potential to help with nowcasting and forecasting of election results as well as identifying the main issues with a candidate – tasks of much interest t... | [
"Who are the crowdworkers?",
"Which toolkits do they use?",
"Which sentiment class is the most accurately predicted by ELS systems?",
"Is datasets for sentiment analysis balanced?",
"What measures are used for evaluation?"
] | [
[
"people in the US that use Amazon Mechanical Turk",
""
],
[
"",
""
],
[
"neutral sentiment"
],
[
""
],
[
""
]
] |
Emotion detection has long been a topic of interest to scholars in natural language processing (NLP) domain. Researchers aim to recognize the emotion behind the text and distribute similar ones into the same group. Establishing an emotion classifier can not only understand each user's feeling but also be extended to va... | [
"what were the baselines?",
"what datasets were used?",
"What BERT models are used?",
"What are the sources of the datasets?",
"What labels does the dataset have?"
] | [
[
"BOW-LR, BOW-RF. TFIDF-RF, TextCNN, C-TextCNN",
""
],
[
"",
""
],
[
"BERT-base, BERT-large, BERT-uncased, BERT-cased"
],
[
""
],
[
""
]
] |
Word embeddings BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 are unsupervised learning methods for capturing latent semantic structure in language. Word embedding methods analyze text data to learn distributed representations of the vocabulary that capture its co-occurrence statistics. These repr... | [
"Do they evaluate on English only datasets?",
"What experiments are used to demonstrate the benefits of this approach?",
"What hierarchical modelling approach is used?",
"How do co-purchase patterns vary across seasons?",
"Which words are used differently across ArXiv?"
] | [
[
"",
""
],
[
"",
"Calculate test log-likelihood on the three considered datasets"
],
[
""
],
[
""
],
[
""
]
] |
Headline generation is the process of creating a headline-style sentence given an input article. The research community has been regarding the task of headline generation as a summarization task BIBREF1, ignoring the fundamental differences between headlines and summaries. While summaries aim to contain most of the imp... | [
"What is future work planed?",
"What is this method improvement over the best performing state-of-the-art?",
"Which baselines are used for evaluation?",
"Did they used dataset from another domain for evaluation?",
"How is sensationalism scorer trained?"
] | [
[
""
],
[
""
],
[
""
],
[
"",
""
],
[
"",
""
]
] |
The concept of message passing over graphs has been around for many years BIBREF0, BIBREF1, as well as that of graph neural networks (GNNs) BIBREF2, BIBREF3. However, GNNs have only recently started to be closely investigated, following the advent of deep learning. Some notable examples include BIBREF4, BIBREF5, BIBREF... | [
"Which component is the least impactful?",
"Which component has the greatest impact on performance?",
"What is the state-of-the-art system?",
"Which datasets are used?",
"What is the message passing framework?"
] | [
[
"Based on table results provided changing directed to undirected edges had least impact - max abs difference of 0.33 points on all three datasets."
],
[
"Increasing number of message passing iterations showed consistent improvement in performance - around 1 point improvement compared between 1 and 4 i... |
Sarcasm is an intensive, indirect and complex construct that is often intended to express contempt or ridicule . Sarcasm, in speech, is multi-modal, involving tone, body-language and gestures along with linguistic artifacts used in speech. Sarcasm in text, on the other hand, is more restrictive when it comes to such no... | [
"What other evaluation metrics are looked at?",
"What is the best reported system?",
"What kind of stylistic features are obtained?",
"What traditional linguistics features did they use?",
"What cognitive features are used?"
] | [
[
"",
""
],
[
"Gaze Sarcasm using Multi Instance Logistic Regression.",
""
],
[
""
],
[
""
],
[
"Readability (RED), Number of Words (LEN), Avg. Fixation Duration (FDUR), Avg. Fixation Count (FC), Avg. Saccade Length (SL), Regression Count (REG), Skip count (SKIP), Count ... |
In June 2015, the operators of the online discussion site Reddit banned several communities under new anti-harassment rules. BIBREF0 used this opportunity to combine rich online data with computational methods to study a current question: Does eliminating these “echo chambers” diminish the amount of hate speech overall... | [
"What approaches do they use towards text analysis?",
"What dataset do they use for analysis?",
"Do they demonstrate why interdisciplinary insights are important?",
"What background do they have?",
"What kind of issues (that are not on the forefront of computational text analysis) do they tackle?"
] | [
[
"",
"Modeling considerations: the variables (both predictors and outcomes) are rarely simply binary or categorical; using a particular classification scheme means deciding which variations are visible,; Supervised and unsupervised learning are the most common approaches to learning from data; the unit... |