| id | title | abstract | full_text | qas | figures_and_tables | question | retrieval_gt | answer_gt | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|
1705.07368 | Mixed Membership Word Embeddings for Computational Social Science | Word embeddings improve the performance of NLP systems by revealing the hidden structural relationships between words. Despite their success in many applications, word embeddings have seen very little use in computational social science NLP tasks, presumably due to their reliance on big data, and to a lack of interpret... | {
"paragraphs": [
[
"Word embedding models, which learn to encode dictionary words with vector space representations, have been shown to be valuable for a variety of natural language processing (NLP) tasks such as statistical machine translation BIBREF2 , part-of-speech tagging, chunking, and named entity r... | {
"answers": [
{
"annotation_id": [
"638a79523ddc482d96be422ad091c20c92ccf7d9"
],
"answer": [
{
"evidence": [
"In this paper, I introduce an interpretable word embedding model, and an associated topic model, which are designed to work well when trained on a ... | {
"caption": [
"Table 1: Most similar words to “learning,” based on word embeddings trained on NIPS articles, and on the large generic Google News corpus (Mikolov et al., 2013a,b).",
"Table 2: “Generative” models. Identifying the skip-gram (top-left)’s word distributions with topics yields analogous topic mod... | [
"Why is big data not appropriate for this task?",
"What is an example of a computational social science NLP task?"
] | [
[
"1705.07368-Introduction-1",
"1705.07368-Conclusion-0"
],
[
"1705.07368-Computational Social Science Case Studies: State of the Union and NIPS-0"
]
] | [
"Training embeddings from small corpora can increase the performance on some tasks",
"Visualization of State of the union addresses"
] | 383 |
2001.05970 | #MeToo on Campus: Studying College Sexual Assault at Scale Using Data Reported on Social Media | Recently, the emergence of the #MeToo trend on social media has empowered thousands of people to share their own sexual harassment experiences. This viral trend, in conjunction with the massive personal information and content available on Twitter, presents a promising opportunity to extract data driven insights to com... | {
"paragraphs": [
[
"Sexual harassment is defined as \"bullying or coercion of a sexual nature, or the unwelcome or inappropriate promise of rewards in exchange for sexual favors.\" In fact, it is an ongoing problem in the U.S., especially within the higher education community. According to the National Sex... | {
"answers": [
{
"annotation_id": [
"5a280f6f2ee2fb369c1b5ff5a59c638763efefd6"
],
"answer": [
{
"evidence": [
"We examine other features regarding the characteristics of the studied colleges, which might be significant factors of sexual harassment. Four fact... | {
"caption": [
"Figure 1: The meaning representation of the example sentence ”He harassed me.” in TRIPS LF, the Ontology types of the words are indicated by ”:*” and the role-argument relations between them are denoted by named arcs.",
"Table 1: Top 5 topics from all #MeToo Tweets from 51,104 college follower... | [
"Which major geographical regions are studied?",
"How strong is the correlation between the prevalence of the #MeToo movement and official reports [of sexual harassment]?",
"How are the topics embedded in the #MeToo tweets extracted?",
"Which geographical regions correlate to the trend?"
] | [
[
"2001.05970-Methodology ::: Regression Analysis-0"
],
[
"2001.05970-Methodology ::: Regression Analysis-0",
"2001.05970-4-Table2-1.png"
],
[
"2001.05970-Methodology ::: Topic Modeling on #MeToo Tweets-0"
],
[
"2001.05970-Experimental Results ::: Regression Result-0"
]
] | [
"Northeast U.S, South U.S., West U.S. and Midwest U.S.",
"0.9098 correlation",
"Using Latent Dirichlet Allocation on TF-IDF features computed from the corpus",
"Northeast U.S., West U.S. and South U.S."
] | 386 |
1706.04815 | S-Net: From Answer Extraction to Answer Generation for Machine Reading Comprehension | In this paper, we present a novel approach to machine reading comprehension for the MS-MARCO dataset. Unlike the SQuAD dataset that aims to answer a question with exact text spans in a passage, the MS-MARCO dataset defines the task as answering a question from multiple passages and the words in the answer are not neces... | {
"paragraphs": [
[
"Machine reading comprehension BIBREF0 , BIBREF1 , which attempts to enable machines to answer questions after reading a passage or a set of passages, has attracted great attention from both the research and industry communities in recent years. The release of the Stanford Question Answering Dat... | {
"answers": [
{
"annotation_id": [
"7435bfa311d0cdff749d83b5936b0217ae68141e"
],
"answer": [
{
"evidence": [
"In this paper, we present an extraction-then-synthesis framework for machine reading comprehension shown in Figure 1 , in which the answer is synth... | {
"caption": [
"Figure 1: Overview of S-Net. It first extracts evidence snippets by matching the question and passage, and then generates the answer by synthesizing the question, passage, and evidence snippets.",
"Figure 2: Evidence Extraction Model",
"Figure 3: Answer Synthesis Model",
"Table 2: The ... | [
"What two components are included in their proposed framework?"
] | [
[
"1706.04815-Introduction-3"
]
] | [
"evidence extraction and answer synthesis"
] | 387 |
1903.07398 | Deep Text-to-Speech System with Seq2Seq Model | Recent trends in neural network based text-to-speech/speech synthesis pipelines have employed recurrent Seq2seq architectures that can synthesize realistic sounding speech directly from text characters. These systems however have complex architectures and take a substantial amount of time to train. We introduce severa... | {
"paragraphs": [
[
"Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0 ; these often include acoustic frontends, duration model, acoustic prediction model and vocoder models. The complexity of the TTS problem coupled with the requirement for deep domain expertise means thes... | {
"answers": [
{
"annotation_id": [
"4c53d0802b646168c6b7dc17242d7d69944205b9"
],
"answer": [
{
"evidence": [
"Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0 ; these often include acoustic frontends, duration model, acous... | {
"caption": [
"Figure 1: Overall architecture of our Seq2Seq model for neural text-to-speech. Note that inputs, encoder, decoder and attention are labelled different colors.",
"Figure 2: Attention guide mask. Note that bright area has larger values and dark area has small values.",
"Figure 3: Attention a... | [
"Which modifications do they make to well-established Seq2seq architectures?"
] | [
[
"1903.07398-Related Work-0",
"1903.07398-Introduction-0",
"1903.07398-Guided Attention Mask-0",
"1903.07398-Changes to Attention Mechanism-1",
"1903.07398-Model Overview-0",
"1903.07398-Changes to Attention Mechanism-0"
]
] | [
"Replacing attention mechanism to query-key attention, and adding a loss to make the attention mask as diagonal as possible"
] | 390 |
1710.06700 | Build Fast and Accurate Lemmatization for Arabic | In this paper we describe the complexity of building a lemmatizer for Arabic which has a rich and complex derivational morphology, and we discuss the need for fast and accurate lemmatization to enhance Arabic Information Retrieval (IR) results. We also introduce a new data set that can be used to test lemmatization a... | {
"paragraphs": [
[
"Lemmatization is the process of finding the base form (or lemma) of a word by considering its inflected forms. Lemma is also called dictionary form, or citation form, and it refers to all words having the same meaning.",
"Lemmatization is an important preprocessing step for many a... | {
"answers": [
{
"annotation_id": [
"6f8264da377ead4925f48d0637888203343381c7"
],
"answer": [
{
"evidence": [
"In terms of speed, our system was able to lemmatize 7.4 million words on a personal laptop in almost 2 minutes compared to 2.5 hours for MADAMIRA, ... | {
"caption": [
"Table 1: Examples of complex verb lemmatization cases",
"Table 2: Examples of complex noun lemmatization cases",
"Figure 2: Buckwalter analysis (diacritization forms and lemmas are highlighted)",
"Figure 1: Lemmatization of WikiNews corpus",
"Table 3: Lemmatization accuracy using W... | [
"How was speed measured?",
"What were their accuracy results on the task?"
] | [
[
"1710.06700-Evaluation-2"
],
[
"1710.06700-4-Table3-1.png"
]
] | [
"how long it takes the system to lemmatize a set number of words",
"97.32%"
] | 392 |
1709.08299 | Dataset for the First Evaluation on Chinese Machine Reading Comprehension | Machine Reading Comprehension (MRC) has become enormously popular recently and has attracted a lot of attention. However, existing reading comprehension datasets are mostly in English. To add diversity in reading comprehension datasets, in this paper we propose a new Chinese reading comprehension dataset for accelerati... | {
"paragraphs": [
[
"Machine Reading Comprehension (MRC) has become enormously popular in recent research, which aims to teach the machine to comprehend human languages and answer the questions based on the reading materials. Among various reading comprehension tasks, the cloze-style reading comprehension is... | {
"answers": [
{
"annotation_id": [
"82716e753629b11f8be6efd94942b8685e0213f4"
],
"answer": [
{
"evidence": [
"Cloze Track: In this track, the participants are required to use the large-scale training data to train their cloze system and evaluate on the cloz... | {
"caption": [
"Table 1: Statistics of the dataset for the 1st Evaluation on Chinese Machine Reading Comprehension (CMRC-2017).",
"Figure 1: Examples of the proposed datasets (the English translation is in grey). The sentence ID is depicted at the beginning of each row. In the Cloze Track, “XXXXX” represents ... | [
"What two types the Chinese reading comprehension dataset consists of?",
"For which languages most of the existing MRC datasets are created?"
] | [
[
"1709.08299-The 1st Evaluation on Chinese Machine Reading Comprehension (CMRC-2017)-1",
"1709.08299-The 1st Evaluation on Chinese Machine Reading Comprehension (CMRC-2017)-2"
],
[
"1709.08299-People Daily & Children's Fairy Tale-0"
]
] | [
"cloze-style reading comprehension and user query reading comprehension questions",
"English"
] | 397 |
1909.08167 | Weighed Domain-Invariant Representation Learning for Cross-domain Sentiment Analysis | Cross-domain sentiment analysis is currently a hot topic in the research and engineering areas. One of the most popular frameworks in this field is the domain-invariant representation learning (DIRL) paradigm, which aims to learn a distribution-invariant feature representation across domains. However, in this work, we ... | {
"paragraphs": [
[
"Sentiment analysis aims to predict sentiment polarity of user-generated data with emotional orientation like movie reviews. The exponential increase of online reviews makes it an interesting topic in research and industrial areas. However, reviews can span so many different domains an... | {
"answers": [
{
"annotation_id": [
"2b35c166a73e4a7268e267e0831a9ce47f3a865b"
],
"answer": [
{
"evidence": [
"According to the above analysis, we proposed a weighted version of DIRL to address the problem caused by the shift of $\\rm {P}(\\rm {Y})$ to DIRL.... | {
"caption": [
"Table 1: Mean accuracy ± standard deviation over five runs on the 12 binary-class cross-domain tasks.",
"Figure 1: Mean accuracy of WCMD†† over different initialization of w. The empirical optimum value of w makes w1PS(Y = 1) = 0.75. The dot line in the same color denotes performance of the CM... | [
"Which sentiment analysis tasks are addressed?"
] | [
[
"1909.08167-Experiment ::: Dataset and Task Design ::: Multi-Class.-0",
"1909.08167-Experiment ::: Dataset and Task Design-0",
"1909.08167-Experiment ::: Dataset and Task Design ::: Binary-Class.-0"
]
] | [
"12 binary-class classification and multi-class classification of reviews based on rating"
] | 400 |
1911.03562 | The State of NLP Literature: A Diachronic Analysis of the ACL Anthology | The ACL Anthology (AA) is a digital repository of tens of thousands of articles on Natural Language Processing (NLP). This paper examines the literature as a whole to identify broad trends in productivity, focus, and impact. It presents the analyses in a sequence of questions and answers. The goal is to record the stat... | {
"paragraphs": [
[
"The ACL Anthology (AA) is a digital repository of tens of thousands of articles on Natural Language Processing (NLP) / Computational Linguistics (CL). It includes papers published in the family of ACL conferences as well as in other NLP conferences such as LREC and RANLP. AA is the larg... | {
"answers": [
{
"annotation_id": [
"2b58e703bb70e489b5e660be7244333759ea1c28"
],
"answer": [
{
"evidence": [
"Discussion: Numbers for an additional 32 areas are available online. Observe that in only about 12% (7 of the top 59) of the most cited areas of re... | {
"caption": [
"Figure 1 The number of AA papers published in each of the years from 1965 to 2018.",
"Figure 2 The number of authors of AA papers from 1965 to 2018.",
"Figure 3 Number of AA papers by type.",
"Figure 4 The number of main conference papers for various venues and paper types (workshop pa... | [
"Which 3 NLP areas are cited the most?",
"Which journal and conference are cited the most in recent years?",
"Which 5 languages appear most frequently in AA paper titles?"
] | [
[
"1911.03562-29-Figure33-1.png"
],
[
"1911.03562-20-Figure25-1.png"
],
[
"1911.03562-10-Figure11-1.png"
]
] | [
"machine translation, statistical machine, sentiment analysis",
"CL Journal and EMNLP conference",
"English, Chinese, French, Japanese and Arabic"
] | 401 |
1912.10435 | BERTQA -- Attention on Steroids | In this work, we extend the Bidirectional Encoder Representations from Transformers (BERT) with an emphasis on directed coattention to obtain an improved F1 performance on the SQUAD2.0 dataset. The Transformer architecture on which BERT is based places hierarchical global attention on the concatenation of the context a... | {
"paragraphs": [
[
"Through this CS224N Pre-trained Contextual Embeddings (PCE) project, we tackle the question answering problem which is one of the most popular in NLP and has been brought to the forefront by datasets such as SQUAD 2.0. This problem's success stems from both the challenge it presents and... | {
"answers": [
{
"annotation_id": [
"2b838f331b408f376e6f0bf242ec8cc7c8841852"
],
"answer": [
{
"evidence": [
"We constructed the ensembled predictions by choosing the answer from the network that had the highest probability and choosing no answer if any of ... | {
"caption": [
"Figure 1: Proposed C2Q and Q2C directed coattention architecture",
"Figure 2: Convolutional Layers for Local Attention (in channels, out channels, kernel size)",
"Figure 3: Back Translation to augment the SQuAD dataset",
"Table 1: Model Configurations; BS = Batch Size, GA = Gradient Ac... | [
"How much F1 was improved after adding skip connections?"
] | [
[
"1912.10435-6-Table2-1.png",
"1912.10435-Results and Analysis-0"
]
] | [
"Simple Skip improves F1 from 74.34 to 74.81\nTransformer Skip improves F1 from 74.34 to 74.95"
] | 402 |
1603.04513 | Multichannel Variable-Size Convolution for Sentence Classification | We propose MVCNN, a convolution neural network (CNN) architecture for sentence classification. It (i) combines diverse versions of pretrained word embeddings and (ii) extracts features of multigranular phrases with variable-size convolution filters. We also show that pretraining MVCNN is critical for good performance. ... | {
"paragraphs": [
[
"Different sentence classification tasks are crucial for many Natural Language Processing (NLP) applications. Natural language sentences have complicated structures, both sequential and hierarchical, that are essential for understanding them. In addition, how to decode and compose the fe... | {
"answers": [
{
"annotation_id": [
"5388e251489e27615d6b54e9b7771fff278d4b37"
],
"answer": [
{
"evidence": [
"Pretraining. Sentence classification systems are usually implemented as supervised training regimes where training loss is between true label distr... | {
"caption": [
"Figure 1: MVCNN: supervised classification and pretraining.",
"Table 1: Description of five versions of word embedding.",
"Table 2: Statistics of five embedding versions for four tasks. The first block with five rows provides the number of unknown words of each task when using correspondin... | [
"How much gain does the model achieve with pretraining MVCNN?"
] | [
[
"1603.04513-8-Table3-1.png"
]
] | [
"0.8 points on Binary; 0.7 points on Fine-Grained; 0.6 points on Senti140; 0.7 points on Subj"
] | 404 |
1607.06025 | Constructing a Natural Language Inference Dataset using Generative Neural Networks | Natural Language Inference is an important task for Natural Language Understanding. It is concerned with classifying the logical relation between two sentences. In this paper, we propose several text generative neural networks for generating text hypothesis, which allows construction of new Natural Language Inference d... | {
"paragraphs": [
[
"The challenge in Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), is to correctly decide whether a sentence (referred to as a premise) entails or contradicts or is neutral in respect to another sentence (a hypothesis). This classification task requir... | {
"answers": [
{
"annotation_id": [
"2cf0a303727b51c1d38502912a5b727cfce62ac0"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 4: The performance of classifiers trained on the original and generated datasets. The classifiers were tested on original tes... | {
"caption": [
"Table 1: Three NLI examples from SNLI.",
"Figure 1: Evaluation of NLI generative models. Note that both datasets are split on training test and validation sets.",
"Figure 2: NLI classification model",
"Figure 3: Generative models architecture. The rounded boxes represent trainable para... | [
"What is the highest accuracy score achieved?"
] | [
[
"1607.06025-15-Table4-1.png"
]
] | [
"82.0%"
] | 405 |
1909.04181 | BERT-Based Arabic Social Media Author Profiling | We report our models for detecting age, language variety, and gender from social media data in the context of the Arabic author profiling and deception detection shared task (APDA). We build simple models based on pre-trained bidirectional encoders from transformers (BERT). We first fine-tune the pre-trained BERT model... | {
"paragraphs": [
[
"The proliferation of social media has made it possible to collect user data in unprecedented ways. These data can come in the form of usage and behavior (e.g., who likes what on Facebook), network (e.g., who follows a given user on Instagram), and content (e.g., what people post to Twit... | {
"answers": [
{
"annotation_id": [
"5d53fcb4aec782f6e44f8f9c9654f7a112c00fe3"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no":... | {
"caption": [
"Table 1. Tweet level results on DEV",
"Table 2. Results of our submissions on official test data (user level)"
],
"file": [
"3-Table1-1.png",
"4-Table2-1.png"
]
} | [
"What are the three datasets used in the paper?"
] | [
[
"1909.04181-Data-0"
]
] | [
"Data released for the APDA shared task contains 3 datasets."
] | 406 |
1909.00252 | Humor Detection: A Transformer Gets the Last Laugh | Much previous work has been done in attempting to identify humor in text. In this paper we extend that capability by proposing a new task: assessing whether or not a joke is humorous. We present a novel way of approaching this problem by building a model that learns to identify humorous jokes based on ratings gleaned f... | {
"paragraphs": [
[
"Recent advances in natural language processing and neural network architecture have allowed for widespread application of these methods in Text Summarization BIBREF0, Natural Language Generation BIBREF1, and Text Classification BIBREF2. Such advances have enabled scientists to study com... | {
"answers": [
{
"annotation_id": [
"71d3ca59dc8457559c2e3457c62b41d2c30b5ab9"
],
"answer": [
{
"evidence": [
"In order to understand what may be happening in the model, we used the body and punchline only datasets to see what part of the joke was most impor... | {
"caption": [
"Table 1: Example format of the Reddit Jokes dataset",
"Table 2: Results of Accuracy on Reddit Jokes dataset",
"Figure 1: Transformer Model Architecture",
"Table 3: Comparison of Methods on Pun of the Day Dataset. HCF represents Human Centric Features, F for increasing the number of fil... | [
"What is the improvement in accuracy for Short Jokes in relation to other types of jokes?"
] | [
[
"1909.00252-Experiments ::: Results-0",
"1909.00252-3-Table2-1.png",
"1909.00252-Experiments ::: Results-2",
"1909.00252-Experiments ::: Results-3",
"1909.00252-4-Table4-1.png",
"1909.00252-4-Table3-1.png"
]
] | [
"It had the highest accuracy compared to all datasets (0.986%), and the highest improvement over previous methods on the same dataset (8%)"
] | 409 |
1808.09920 | Question Answering by Reasoning Across Documents with Graph Convolutional Networks | Most research in reading comprehension has focused on answering questions based on individual documents or even single paragraphs. We introduce a neural model which integrates and reasons relying on information spread within documents and across multiple documents. We frame it as an inference problem on a graph. Mentio... | {
"paragraphs": [
[
"The long-standing goal of natural language understanding is the development of systems which can acquire knowledge from text collections. Fresh interest in reading comprehension tasks was sparked by the availability of large-scale datasets, such as SQuAD BIBREF1 and CNN/Daily Mail BIBRE... | {
"answers": [
{
"annotation_id": [
"e24ec730b51654dd114621a19ddd11dfa3f0ae2a"
],
"answer": [
{
"evidence": [
"In this experiment, we compare our Entity-GCN against recent prior work on the same task. We present test and development results (when present) f... | {
"caption": [
"Figure 1: A sample from WIKIHOP where multi-step reasoning and information combination from different documents is necessary to infer the correct answer.",
"Figure 2: Supporting documents (dashed ellipses) organized as a graph where nodes are mentions of either candidate entities or query enti... | [
"What baseline did they compare Entity-GCN to?",
"How did they get relations between mentions?",
"How did they detect entity mentions?",
"What performance does the Entity-GCN get on WIKIHOP?"
] | [
[
"1808.09920-Comparison-0"
],
[
"1808.09920-Reasoning on an entity graph-4"
],
[
"1808.09920-Reasoning on an entity graph-3",
"1808.09920-Reasoning on an entity graph-2",
"1808.09920-Reasoning on an entity graph-0",
"1808.09920-Reasoning on an entity graph-1"
],
[
"1808.0992... | [
"Human, FastQA, BiDAF, Coref-GRU, MHPGM, Weaver / Jenga, MHQA-GRN",
"Assign a value to the relation based on whether mentions occur in the same document, if mentions are identical, or if mentions are in the same coreference chain.",
"Exact matches to the entity string and predictions from a coreference resoluti... | 411 |
1809.01060 | The Effect of Context on Metaphor Paraphrase Aptness Judgments | We conduct two experiments to study the effect of context on metaphor paraphrase aptness judgments. The first is an AMT crowd source task in which speakers rank metaphor paraphrase candidate sentence pairs in short document contexts for paraphrase aptness. In the second we train a composite DNN to predict these human j... | {
"paragraphs": [
[
"A metaphor is a way of forcing the normal boundaries of a word's meaning in order to better express an experience, a concept or an idea. To a native speaker's ear some metaphors sound more conventional (like the usage of the words ear and sound in this sentence), others more original. T... | {
"answers": [
{
"annotation_id": [
"6726336c9aa75736f937d405bdddafff8050dea8"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": ... | {
"caption": [
"Figure 1: In-context and out-of-context mean ratings. Points above the broken diagonal line represent sentence pairs which received a higher rating when presented in context. The total least-square linear regression is shown as the second line.",
"Figure 2: DNN encoder for predicting metaphori... | [
"What document context was added?",
"What were the results of the first experiment?"
] | [
[
"1809.01060-Annotating Metaphor-Paraphrase Pairs in Contexts-4"
],
[
"1809.01060-MPAT Modelling Results-4"
]
] | [
"Preceding and following sentence of each metaphor and paraphrase are added as document context",
"Best performance achieved is 0.72 F1 score"
] | 414 |
1705.03261 | Drug-drug Interaction Extraction via Recurrent Neural Network with Multiple Attention Layers | Drug-drug interaction (DDI) is a vital information when physicians and pharmacists intend to co-administer two or more drugs. Thus, several DDI databases are constructed to avoid mistakenly combined use. In recent years, automatically extracting DDIs from biomedical text has drawn researchers' attention. However, the e... | {
"paragraphs": [
[
"Drug-drug interaction (DDI) is a situation when one drug increases or decreases the effect of another drug BIBREF0 . Adverse drug reactions may cause severe side effects if two or more medicines are taken and their DDIs are not investigated in detail. DDI is a common cause of illness, ... | {
"answers": [
{
"annotation_id": [
"a7063c73663ca3470b2f8c60c0a294efee32a10b"
],
"answer": [
{
"evidence": [
"The DDI corpus contains thousands of XML files, each of which are constructed by several records. For a sentence containing INLINEFORM0 drugs, ther... | {
"caption": [
"TABLE I THE DDI TYPES AND CORRESPONDING EXAMPLES",
"Fig. 1. Partial records in DDI corpus",
"Fig. 2. The bidirectional recurrent neural network with multiple attentions",
"Fig. 3. The Gated Recurrent Unit",
"Fig. 4. The objective function and F1 in the train process",
"Fig. 5. ... | [
"By how much does their model outperform existing methods?",
"What is the performance of their model?"
] | [
[
"1705.03261-Experimental Results-2"
],
[
"1705.03261-Experimental Results-2"
]
] | [
"Answer with content missing: (Table II) Proposed model has F1 score of 0.7220 compared to the best state-of-the-art result of 0.7148.",
"Answer with content missing: (Table II) Proposed model has F1 score of 0.7220."
] | 415 |
2002.08899 | Compositional Neural Machine Translation by Removing the Lexicon from Syntax | The meaning of a natural language utterance is largely determined from its syntax and words. Additionally, there is evidence that humans process an utterance by separating knowledge about the lexicon from syntax knowledge. Theories from semantics and neuroscience claim that complete word meanings are not encoded in the... | {
"paragraphs": [
[
"Studies of Broca's and Wernicke's aphasia provide evidence that our brains understand an utterance by creating separate representations for word meaning and word arrangement BIBREF0. There is a related thesis about human language, present across many theories of semantics, which is that... | {
"answers": [
{
"annotation_id": [
"a1b30589d21ca744e1d2d49c2af15b3149c12399"
],
"answer": [
{
"evidence": [
"In a natural language translation setting, suppose that an input word corresponds to a set of output tokens independently of its context. Even thou... | {
"caption": [
"Figure 1: A graphic of our model. In addition to the terms described under our equations, we depict e terms which are embeddings for input tokens, for use by the LSTM encoder. The LSTM encoder is E, the LSTM decoder is D, the Lexical Unit is L and the Lexicon-Adversary Unit is LA. The dotted area ... | [
"How do they damage different neural modules?"
] | [
[
"2002.08899-4-Table2-1.png"
]
] | [
"Damage to neural modules is done by randomly initializing their weights, causing the loss of all learned information."
] | 416 |
1803.07771 | $\rho$-hot Lexicon Embedding-based Two-level LSTM for Sentiment Analysis | Sentiment analysis is a key component in various text mining applications. Numerous sentiment classification techniques, including conventional and deep learning-based methods, have been proposed in the literature. In most existing methods, a high-quality training set is assumed to be given. Nevertheless, constructing ... | {
"paragraphs": [
[
"Text is important in many artificial intelligence applications. Among various text mining techniques, sentiment analysis is a key component in applications such as public opinion monitoring and comparative analysis. Sentiment analysis can be divided into three problems according to inpu... | {
"answers": [
{
"annotation_id": [
"a3988736b95d4fd1f9b4de1cc7addeaf1b4c7752"
],
"answer": [
{
"evidence": [
"In this subsubsection, each of the three raw data sets (associated with their labels) shown in Table 1 is used. The clause data are not used. In ot... | {
"caption": [
"Figure 1. A lexicon-based approach for sentiment classification.",
"Figure 2. Two representative LSTM structures for text classification: a bi-directional LSTM (left), and a bi-directional LSTM with attention (right).",
"Figure 3. The proposed two-level LSTM network.",
"Figure 4. The h... | [
"How long are the datasets?",
"What are the sources of the data?",
"What is the new labeling strategy?"
] | [
[
"1803.07771-The Learning Procedure-13",
"1803.07771-The Learning Procedure-16",
"1803.07771-7-Table1-1.png"
],
[
"1803.07771-The Learning Procedure-13"
],
[
"1803.07771-Introduction-4"
]
] | [
"Travel dataset contains 4100 raw samples, 11291 clauses, Hotel dataset contains 3825 raw samples, 11264 clauses, and the Mobile dataset contains 3483 raw samples and 8118 clauses",
"User reviews written in Chinese collected online for hotel, mobile phone, and travel domains",
"They use a two-stage labeling str... | 418 |
1907.05403 | Incrementalizing RASA's Open-Source Natural Language Understanding Pipeline | As spoken dialogue systems and chatbots are gaining more widespread adoption, commercial and open-sourced services for natural language understanding are emerging. In this paper, we explain how we altered the open-source RASA natural language understanding pipeline to process incrementally (i.e., word-by-word), followi... | {
"paragraphs": [
[
"There is no shortage of services that are marketed as natural language understanding (nlu) solutions for use in chatbots, digital personal assistants, or spoken dialogue systems (sds). Recently, Braun2017 systematically evaluated several such services, including Microsoft LUIS, IBM Wats... | {
"answers": [
{
"annotation_id": [
"2f6dbdd7c8cb2cd735a26a2b03eb344ab650cdb9"
],
"answer": [
{
"evidence": [
"To evaluate the performance of our approach, we used a subset of the SNIPS BIBREF12 dataset, which is readily available in RASA nlu format. Our tra... | {
"caption": [
"Figure 1: The lifecycle of RASA components (from https://rasa.com/docs/nlu/)",
"Table 1: Test data results of non-incremental TensorFlow, restart-incremental TensorFlow, non-incremental SIUM, and update-incremental SIUM."
],
"file": [
"2-Figure1-1.png",
"4-Table1-1.png"
]
} | [
"How are their changes evaluated?"
] | [
[
"1907.05403-Data, Task, Metrics-0",
"1907.05403-Data, Task, Metrics-1"
]
] | [
"The changes are evaluated based on accuracy of intent and entity recognition on the SNIPS dataset"
] | 420 |
1707.06945 | Cross-Lingual Induction and Transfer of Verb Classes Based on Word Vector Space Specialisation | Existing approaches to automatic VerbNet-style verb classification are heavily dependent on feature engineering and therefore limited to languages with mature NLP pipelines. In this work, we propose a novel cross-lingual transfer method for inducing VerbNets for multiple languages. To the best of our knowledge, this is... | {
"paragraphs": [
[
"Playing a key role in conveying the meaning of a sentence, verbs are famously complex. They display a wide range of syntactic-semantic behaviour, expressing the semantics of an event as well as relational information among its participants BIBREF0 , BIBREF1 , BIBREF2 .",
"Lexical ... | {
"answers": [
{
"annotation_id": [
"8d8ac5e2871d148fadde8640840d62386779dea8"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": ... | {
"caption": [
"Figure 1: Transferring VerbNet information from a resource-rich to a resource-lean language through a word vector space: an English→ French toy example. Representations of words described by two types of ATTRACT constraints are pulled closer together in the joint vector space. (1) Monolingual pair... | [
"What are the six target languages?"
] | [
[
"1707.06945-Clustering Algorithm-0"
]
] | [
"Answer with content missing: (3 Experimental Setup) We experiment with six target languages: French (FR), Brazilian Portuguese (PT), Italian (IT), Polish (PL), Croatian (HR), and Finnish (FI)."
] | 421 |
1802.05574 | Open Information Extraction on Scientific Text: An Evaluation | Open Information Extraction (OIE) is the task of the unsupervised creation of structured information from text. OIE is often used as a starting point for a number of downstream tasks including knowledge base construction, relation extraction, and question answering. While OIE methods are targeted at being domain indepe... | {
"paragraphs": [
[
" This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/",
"The scientific literature is growing at a rapid rate BIBREF0 . To make sense of this flood of literature, for example, to extract ... | {
"answers": [
{
"annotation_id": [
"3003cf1d33c198e21640ab6bc76664481925b8a2"
],
"answer": [
{
"evidence": [
"We used two different data sources in our evaluation. The first dataset (WIKI) was the same set of 200 sentences from Wikipedia used in BIBREF7 . T... | {
"caption": [
"Table 1: Examples of difficult to judge triples and their associated sentences.",
"Table 2: Results of triples extracted from the SCI and WIKI corpora using the Open IE and MinIE tools.",
"Figure 1: Precision at various agreement levels. Agreement levels are shown as the proportion of over... | [
"What is the size of the released dataset?",
"Which OpenIE systems were used?"
] | [
[
"1802.05574-Datasets-1",
"1802.05574-Datasets-0",
"1802.05574-Judgement Data and Inter-Annotator Agreement-0",
"1802.05574-Annotation Process-0"
],
[
"1802.05574-Systems-0"
]
] | [
"440 sentences, 2247 triples extracted from those sentences, and 11262 judgements on those triples.",
"OpenIE 4 and MinIE"
] | 424 |
1705.00108 | Semi-supervised sequence tagging with bidirectional language models | Pre-trained word embeddings learned from unlabeled text have become a standard component of neural network architectures for NLP tasks. However, in most cases, the recurrent network that operates on word-level representations to produce context sensitive representations is trained on relatively little labeled data. In ... | {
"paragraphs": [
[
"Due to their simplicity and efficacy, pre-trained word embedding have become ubiquitous in NLP systems. Many prior studies have shown that they capture useful semantic and syntactic information BIBREF0 , BIBREF1 and including them in NLP systems has been shown to be enormously helpful f... | {
"answers": [
{
"annotation_id": [
"5113446c5057085c6fda63fc2324d5788a26a8b6"
],
"answer": [
{
"evidence": [
"In our final system, after pre-training the forward and backward LMs separately, we remove the top layer softmax and concatenate the forward and ba... | {
"caption": [
"Figure 1: The main components in TagLM, our language-model-augmented sequence tagging system. The language model component (in orange) is used to augment the input token representation in a traditional sequence tagging models (in grey).",
"Figure 2: Overview of TagLM, our language model augmen... | [
"how are the bidirectional lms obtained?",
"what metrics are used in evaluation?",
"what results do they achieve?",
"what previous systems were compared to?"
] | [
[
"1705.00108-Bidirectional LM-4"
],
[
"1705.00108-Experiments-0",
"1705.00108-5-Table1-1.png"
],
[
"1705.00108-Introduction-4"
],
[
"1705.00108-5-Table2-1.png",
"1705.00108-5-Table1-1.png",
"1705.00108-6-Table3-1.png",
"1705.00108-Overall system results-0",
"1705.001... | [
"They pre-train forward and backward LMs separately, remove top layer softmax, and concatenate to obtain the bidirectional LMs.",
"micro-averaged F1",
"91.93% F1 score on CoNLL 2003 NER task and 96.37% F1 score on CoNLL 2000 Chunking task",
"Chiu and Nichols (2016), Lample et al. (2016), Ma and Hovy (2016), Y... | 426 |
2002.06053 | Exploring Chemical Space using Natural Language Processing Methodologies for Drug Discovery | Text-based representations of chemicals and proteins can be thought of as unstructured languages codified by humans to describe domain-specific knowledge. Advances in natural language processing (NLP) methodologies in the processing of spoken languages accelerated the application of NLP to elucidate hidden knowledge in... | {
"paragraphs": [
[
"The design and discovery of novel drugs for protein targets is powered by an understanding of the underlying principles of protein-compound interaction. Biochemical methods that measure affinity and biophysical methods that describe the interaction in atomistic level detail have provide... | {
"answers": [
{
"annotation_id": [
"3038e61acca2a4af92e033f6aa6f00b2cdc1f3a5"
],
"answer": [
{
"evidence": [
"Despite the focus on sharing datasets and source codes on popular software development platforms such as GitHub (github.com) or Zenodo (zenodo.org)... | {
"caption": [
"Figure 1: The illustration of the Skip-Gram architecture of the Word2Vec algorithm. For a vocabulary of size V, each word in the vocabulary is described as a one-hot encoded vector (a binary vector in which only the corresponding word position is set to 1). The Skip-Gram architecture is a simple o... | [
"Are this models usually semi/supervised or unsupervised?"
] | [
[
"2002.06053-Biochemical Language Processing ::: Text generation-3",
"2002.06053-Biochemical Language Processing ::: Text generation-4",
"2002.06053-Biochemical Language Processing ::: Text generation ::: Machine Translation-0"
]
] | [
"Both supervised and unsupervised, depending on the task that needs to be solved."
] | 427 |
1712.03547 | Inducing Interpretability in Knowledge Graph Embeddings | We study the problem of inducing interpretability in KG embeddings. Specifically, we explore the Universal Schema (Riedel et al., 2013) and propose a method to induce interpretability. There have been many vector space models proposed for the problem, however, most of these methods don't address the interpretability (s... | {
"paragraphs": [
[
"Knowledge Graphs such as Freebase, WordNet etc. have become important resources for supporting many AI applications like web search, Q&A etc. They store a collection of facts in the form of a graph. The nodes in the graph represent real world entities such as Roger Federer, Tennis, Unit... | {
"answers": [
{
"annotation_id": [
"bce54aa89e2f59080e2bf3c6ca440458d73863b0"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no":... | {
"caption": [
"Table 1: Results on test data. The proposed method significantly improves interpretability while maintaining comparable performance on KG tasks (Section 4.3).",
"Table 2: Top 5 and bottom most entities for randomly selected dimensions. As we see, the proposed method produces more coherent enti... | [
"When they say \"comparable performance\", how much of a performance drop do these new embeddings result in?"
] | [
[
"1712.03547-4-Table1-1.png"
]
] | [
"Performance was comparable, with the proposed method quite close and sometimes exceeding performance of baseline method."
] | 428 |
1908.07218 | CA-EHN: Commonsense Word Analogy from E-HowNet | Word analogy tasks have tended to be handcrafted, involving permutations of hundreds of words with dozens of relations, mostly morphological relations and named entities. Here, we propose modeling commonsense knowledge down to word-level analogical reasoning. We present CA-EHN, the first commonsense word analogy datase... | {
"paragraphs": [
[
"Commonsense reasoning is fundamental for natural language agents to generalize inference beyond their training corpora. Although the natural language inference (NLI) task BIBREF0 , BIBREF1 has proved a good pre-training objective for sentence representations BIBREF2 , commonsense covera... | {
"answers": [
{
"annotation_id": [
"30b4603a79367cc4c6f73a0eff7717a0033f2885"
],
"answer": [
{
"evidence": [
"We trained word embeddings using either GloVe BIBREF11 or SGNS BIBREF12 on a small or a large corpus. The small corpus consists of the traditional ... | {
"caption": [
"Table 1: E-HowNet lexicon",
"Figure 1: E-HowNet taxonomy",
"Table 2: Commonsense analogy",
"Figure 2: Commonsense analogy extraction",
"Table 3: Analogy benchmarks",
"Table 4: Embedding performance"
],
"file": [
"2-Table1-1.png",
"2-Figure1-1.png",
"3-Table2-1.p... | [
"What types of word representations are they evaluating?"
] | [
[
"1908.07218-Word Embeddings-0"
]
] | [
"GloVe; SGNS"
] | 429 |
1707.05853 | Encoding Word Confusion Networks with Recurrent Neural Networks for Dialog State Tracking | This paper presents our novel method to encode word confusion networks, which can represent a rich hypothesis space of automatic speech recognition systems, via recurrent neural networks. We demonstrate the utility of our approach for the task of dialog state tracking in spoken dialog systems that relies on automatic s... | {
"paragraphs": [
[
"Spoken dialog systems (SDSs) allow users to naturally interact with machines through speech and are nowadays an important research direction, especially with the great success of automatic speech recognition (ASR) systems BIBREF0 , BIBREF1 . SDSs can be designed for generic purposes, e.... | {
"answers": [
{
"annotation_id": [
"312af031a42cf3f12ee68c1f1f1beeb08bd8324f"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 1: The proposed model with GRU-based cnet encoder for a dialog with three turns. dt are one-hot word vectors of the system d... | {
"caption": [
"Figure 1: The proposed model with GRU-based cnet encoder for a dialog with three turns. dt are one-hot word vectors of the system dialog acts; wti correspond to the word hypotheses in the timesteps of the cnets of the user utterances; sj , uj are the cnet GRU outputs at the end of each system or u... | [
"What type of recurrent layers does the model use?",
"What is a word confusion network?"
] | [
[
"1707.05853-2-Figure1-1.png"
],
[
"1707.05853-Introduction-3"
]
] | [
"GRU",
"It is a network used to encode speech lattices to maintain a rich hypothesis space."
] | 430 |
1912.01046 | TutorialVQA: Question Answering Dataset for Tutorial Videos | Despite the number of currently available datasets on video question answering, there still remains a need for a dataset involving multi-step and non-factoid answers. Moreover, relying on video transcripts remains an under-explored topic. To adequately address this, We propose a new question answering task on instructi... | {
"paragraphs": [
[
"Video is the fastest growing medium to create and deliver information today. Consequentially, videos have been increasingly used as main data sources in many question answering problems BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF2, BIBREF5. These previous studies have mostly foc... | {
"answers": [
{
"annotation_id": [
"f466691953c65dd6bef779e24b5b85bb75615e7b"
],
"answer": [
{
"evidence": [
"Metrics. We use tolerance accuracy BIBREF16, which measures how far away the predicted span is from the gold standard span, as a metric. The ration... | {
"caption": [
"Figure 1: An illustration of our task, where the red in the timeline indicates where answers can be found in a video.",
"Table 2: Examples of question variations",
"Figure 3: Baseline models for sentence-level prediction and video segment retrieval tasks.",
"Table 3: Sentence-level pre... | [
"What evaluation metrics were used in the experiment?",
"What kind of instructional videos are in the dataset?",
"What baseline algorithms were presented?"
] | [
[
"1912.01046-Baselines ::: Baseline2: Segment retrieval-5",
"1912.01046-Baselines ::: Baseline3: Pipeline Segment retrieval-4",
"1912.01046-Baselines ::: Baseline1: Sentence-level prediction-5"
],
[
"1912.01046-Introduction-5"
],
[
"1912.01046-Baselines ::: Baseline2: Segment retrieval-... | [
"For sentence-level prediction they used tolerance accuracy, for segment retrieval accuracy and MRR and for the pipeline approach they used overall accuracy",
"tutorial videos for a photo-editing software",
"a sentence-level prediction algorithm, a segment retrieval algorithm and a pipeline segment retrieval al... | 433 |
1910.02339 | Natural- to formal-language generation using Tensor Product Representations | Generating formal-language represented by relational tuples, such as Lisp programs or mathematical expressions, from a natural-language input is an extremely challenging task because it requires to explicitly capture discrete symbolic structural information from the input to generate the output. Most state-of-the-art n... | {
"paragraphs": [
[
"When people perform explicit reasoning, they can typically describe the way to the conclusion step by step via relational descriptions. There is ample evidence that relational representations are important for human cognition (e.g., BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4). Although... | {
"answers": [
{
"annotation_id": [
"c37a3b9cec4a5d805f46b1f9775bec7c0d2edcda"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Results of AlgoLisp dataset",
"Generating Lisp programs requires sensitivity to structural information because... | {
"caption": [
"Figure 1: Overview diagram of TP-N2F.",
"Figure 2: Implementation of TP-N2F encoder.",
"Figure 3: Implementation of TP-N2F decoder.",
"Table 1: Results on MathQA dataset testing set",
"Table 2: Results of AlgoLisp dataset",
"Figure 4: K-means clustering results: MathQA with 5 c... | [
"How does TP-N2F compare to LSTM-based Seq2Seq in terms of training and inference speed?",
"What is the performance proposed model achieved on AlgoList benchmark?",
"What is the performance proposed model achieved on MathQA?"
] | [
[
"1910.02339-8-Table2-1.png",
"1910.02339-EXPERIMENTS ::: Generating program trees from natural-language descriptions-0"
],
[
"1910.02339-8-Table2-1.png",
"1910.02339-EXPERIMENTS ::: Generating program trees from natural-language descriptions-0"
],
[
"1910.02339-EXPERIMENTS ::: Generati... | [
"Full Testing Set accuracy: 84.02\nCleaned Testing Set accuracy: 93.48",
"Full Testing Set Accuracy: 84.02\nCleaned Testing Set Accuracy: 93.48",
"Operation accuracy: 71.89\nExecution accuracy: 55.95"
] | 439 |
2003.06044 | Local Contextual Attention with Hierarchical Structure for Dialogue Act Recognition | Dialogue act recognition is a fundamental task for an intelligent dialogue system. Previous work models the whole dialog to predict dialog acts, which may bring the noise from unrelated sentences. In this work, we design a hierarchical model based on self-attention to capture intra-sentence and inter-sentence informati... | {
"paragraphs": [
[
"Dialogue act (DA) characterizes the type of a speaker's intention in the course of producing an utterance and is approximately equivalent to the illocutionary act of BIBREF0 or the speech act of BIBREF1. The recognition of DA is essential for modeling and automatically detecting discour... | {
"answers": [
{
"annotation_id": [
"33e1ed1490d1fb2416e3c75974d6d437002b91b7"
],
"answer": [
{
"evidence": [
"We evaluate the performance of our model on two high-quality datasets: Switchboard Dialogue Act Corpus (SwDA) BIBREF4 and DailyDialog BIBREF24. SwD... | {
"caption": [
"Table 1: A snippet of a conversation with the DA labels from Switchboard dataset.",
"Figure 1: The model structure for DA recognition, where the LSTM with max pooling is simplified as utterance encoder in our experiment. The area in the red dashed line represents the structure for online predi... | [
"What previous methods is the proposed method compared against?"
] | [
[
"2003.06044-6-Table4-1.png",
"2003.06044-6-Table5-1.png"
]
] | [
"BLSTM+Attention+BLSTM\nHierarchical BLSTM-CRF\nCRF-ASN\nHierarchical CNN (window 4)\nmLSTM-RNN\nDRLM-Conditional\nLSTM-Softmax\nRCNN\nCNN\nCRF\nLSTM\nBERT"
] | 440 |
1902.00821 | Review Conversational Reading Comprehension | Seeking information about products and services is an important activity of online consumers before making a purchase decision. Inspired by recent research on conversational reading comprehension (CRC) on formal documents, this paper studies the task of leveraging knowledge from a huge amount of reviews to answer multi... | {
"paragraphs": [
[
"Seeking information to assess whether some products or services suit one's needs is a vital activity for consumer decision making. In online businesses, one major hindrance is that customers have limited access to answers to their specific questions or concerns about products and user e... | {
"answers": [
{
"annotation_id": [
"33f70e75e30422a4c73dd1e8e8d1609c5038174e"
],
"answer": [
{
"evidence": [
"DrQA is a CRC baseline coming with the CoQA dataset. Note that this implementation of DrQA is different from DrQA for SQuAD BIBREF8 in that it is m... | {
"caption": [
"Table 1: An Example of conversational review reading comprehension (best viewed in colors): we show dialogue with 5-turn questions that a customer asks and the review with textual span answers.",
"Table 2: Statistics of (RC)2 Datasets.",
"Table 3: Results of RCRC on EM (Exact Match) and F1... | [
"What is the baseline model used?"
] | [
[
"1902.00821-Compared Methods-2",
"1902.00821-Compared Methods-1",
"1902.00821-Compared Methods-3",
"1902.00821-Compared Methods-4",
"1902.00821-Compared Methods-5"
]
] | [
"The baseline models used are DrQA modified to support answering no answer questions, DrQA+CoQA which is pre-tuned on CoQA dataset, vanilla BERT, BERT+review tuned on domain reviews, BERT+CoQA tuned on the supervised CoQA data"
] | 442 |
2002.01359 | Schema-Guided Dialogue State Tracking Task at DSTC8 | This paper gives an overview of the Schema-Guided Dialogue State Tracking task of the 8th Dialogue System Technology Challenge. The goal of this task is to develop dialogue state tracking models suitable for large-scale virtual assistants, with a focus on data-efficient joint modeling across domains and zero-shot gener... | {
"paragraphs": [
[
"Virtual assistants help users accomplish tasks including but not limited to finding flights, booking restaurants, by providing a natural language interface to services and APIs on the web. Large-scale assistants like Google Assistant, Amazon Alexa, Apple Siri, Microsoft Cortana etc. nee... | {
"answers": [
{
"annotation_id": [
"e8bbaf26e0f73c3e2b8da0e0e3c348cb1c028643"
],
"answer": [
{
"evidence": [
"Our data collection setup uses a dialogue simulator to generate dialogue outlines first and then paraphrase them to obtain natural utterances. Usin... | {
"caption": [
"Figure 1: Example schema for a digital wallet service.",
"Figure 2: Dialogue state tracking labels after each user utterance in a dialogue in the context of two different flight services. Under the schema-guided approach, the annotations are conditioned on the schema (extreme left/right) of th... | [
"What domains are present in the data?"
] | [
[
"2002.01359-5-Table2-1.png"
]
] | [
"Alarm, Banks, Buses, Calendar, Events, Flights, Homes, Hotels, Media, Messaging, Movies, Music, Payment, Rental Cars, Restaurants, Ride Sharing, Services, Train, Travel, Weather"
] | 444 |
1612.05270 | A Simple Approach to Multilingual Polarity Classification in Twitter | Recently, sentiment analysis has received a lot of attention due to the interest in mining opinions of social media users. Sentiment analysis consists in determining the polarity of a given text, i.e., its degree of positiveness or negativeness. Traditionally, Sentiment Analysis algorithms have been tailored to a speci... | {
"paragraphs": [
[
"Sentiment analysis is a crucial task in opinion mining field where the goal is to extract opinions, emotions, or attitudes to different entities (person, objects, news, among others). Clearly, this task is of interest for all languages; however, there exists a significant gap between En... | {
"answers": [
{
"annotation_id": [
"39b20c00019c285ecaad375b7430c5805dea3c1c"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: Datasets details from each competition tested in this work"
],
"extractive_spans": [],
"free... | {
"caption": [
"Table 1: Parameter list and a brief description of the functionality",
"Table 3: Datasets details from each competition tested in this work",
"Figure 1: The performance listing in four difference challenges. The horizontal lines appearing in a) to d) correspond to B4MSA’s performance. All ... | [
"How many texts/datapoints are in the SemEval, TASS and SENTIPOLC datasets?",
"In which languages did the approach outperform the reported results?"
] | [
[
"1612.05270-6-Table3-1.png"
],
[
"1612.05270-Performance on sentiment analysis contests-8",
"1612.05270-8-Table5-1.png"
]
] | [
"Total number of annotated data:\nSemeval'15: 10712\nSemeval'16: 28632\nTass'15: 69000\nSentipol'14: 6428",
"Arabic, German, Portuguese, Russian, Swedish"
] | 445 |
1705.03151 | Phonetic Temporal Neural Model for Language Identification | Deep neural models, particularly the LSTM-RNN model, have shown great potential for language identification (LID). However, the use of phonetic information has been largely overlooked by most existing neural LID methods, although this information has been used very successfully in conventional phonetic LID systems. We ... | {
"paragraphs": [
[
"Language identification (LID) lends itself to a wide range of applications, such as mixed-lingual (code-switching) speech recognition. Humans use many cues to discriminate languages, and better accuracy can be achieved with the use of more cues. Various LID approaches have been develope... | {
"answers": [
{
"annotation_id": [
"ff81191e38ade51b8a5470f1d5d6b6c495aa0a98"
],
"answer": [
{
"evidence": [
"As the first step, we build three baseline LID systems, one based on the i-vector model, and the other two based on LSTM-RNN, using the speech data... | {
"caption": [
"TABLE I: LID methods with deep learning involvement.",
"Fig. 1: LID models employing phonetic information: (a) the phonetically aware model; (b) the PTN model. Both models consist of a phonetic DNN (left) to produce phonetic features and an LID RNN (right) to make LID decisions.",
"Fig. 2:... | [
"Which is the baseline model?",
"What is the main contribution of the paper? "
] | [
[
"1705.03151-Babel: baseline of bilingual LID-2",
"1705.03151-Babel: baseline of bilingual LID-0"
],
[
"1705.03151-Motivation of the paper-0"
]
] | [
"The three baseline models are the i-vector model, a standard RNN LID system and a multi-task RNN LID system. ",
"Proposing an improved RNN model, the phonetic temporal neural LID approach, based on phonetic features that results in better performance"
] | 447 |
1810.13024 | Bi-Directional Lattice Recurrent Neural Networks for Confidence Estimation | The standard approach to mitigate errors made by an automatic speech recognition system is to use confidence scores associated with each predicted word. In the simplest case, these scores are word posterior probabilities whilst more complex schemes utilise bi-directional recurrent neural network (BiRNN) models. A numbe... | {
"paragraphs": [
[
" Recent years have seen an increased usage of spoken language technology in applications ranging from speech transcription BIBREF0 to personal assistants BIBREF1 . The quality of these applications heavily depends on the accuracy of the underlying automatic speech recognition (ASR) syst... | {
"answers": [
{
"annotation_id": [
"34b7458c20d11f1434b445401fc2b8d83829c213"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Fig. 2: Standard ASR outputs",
"A number of important downstream and upstream applications rely on accurate confidence ... | {
"caption": [
"Fig. 1: Bi-directional neural networks for confidence estimation",
"Fig. 2: Standard ASR outputs",
"Table 1: Confidence estimation performance on 1-best CN arcs",
"Table 5: Confidence estimation performance on all lattice arcs",
"Table 2: Confidence estimation performance on all CN... | [
"What is a confusion network or lattice?"
] | [
[
"1810.13024-2-Figure2-1.png"
]
] | [
"graph-like structures where arcs connect nodes representing multiple hypothesized words, thus allowing multiple incoming arcs unlike 1-best sequences"
] | 450 |
1910.08987 | Representation Learning for Discovering Phonemic Tone Contours | Tone is a prosodic feature used to distinguish words in many languages, some of which are endangered and scarcely documented. In this work, we use unsupervised representation learning to identify probable clusters of syllables that share the same phonemic tone. Our method extracts the pitch for each syllable, then trai... | {
"paragraphs": [
[
"Tonal languages use pitch to distinguish different words, for example, yi in Mandarin may mean `one', `to move', `already', or `art', depending on the pitch contour. Of over 6000 languages in the world, it is estimated that as many as 60-70% are tonal BIBREF0, BIBREF1. A few of these ar... | {
"answers": [
{
"annotation_id": [
"ab5cab754878813810a65c4344fb95cac96ea3a3"
],
"answer": [
{
"evidence": [
"We use data from Mandarin Chinese and Cantonese. For each language, the data consists of a list of spoken words, recorded by the same speaker. The ... | {
"caption": [
"Fig. 1. Pitch contours for the four Mandarin tones and six Cantonese tones in isolation, produced by native speakers. Figure adapted from [3].",
"Fig. 2. Diagram of our model architecture, consisting of a convolutional autoencoder to learn a latent representation for each pitch contour, and me... | [
"How close do clusters match to ground truth tone categories?"
] | [
[
"1910.08987-4-Table3-1.png",
"1910.08987-Results-3"
]
] | [
"NMI between cluster assignments and ground truth tones for all syllables is:\nMandarin: 0.641\nCantonese: 0.464"
] | 451 |
1701.09123 | Robust Multilingual Named Entity Recognition with Shallow Semi-Supervised Features | We present a multilingual Named Entity Recognition approach based on a robust and general set of features across languages and datasets. Our system combines shallow local information with clustering semi-supervised features induced on large amounts of unlabeled text. Understanding via empirical experimentation how to e... | {
"paragraphs": [
[
"A named entity can be mentioned using a great variety of surface forms (Barack Obama, President Obama, Mr. Obama, B. Obama, etc.) and the same surface form can refer to a variety of named entities. For example, according to the English Wikipedia, the form `Europe' can ambiguously be use... | {
"answers": [
{
"annotation_id": [
"352a13bc8abd0ed6638e3f67c48d2b8d2adbdeac"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 5: CoNLL 2003 English results."
],
"extractive_spans": [],
"free_form_answer": "Precision, Reca... | {
"caption": [
"Table 1: Datasets used for training, development and evaluation. MUC7: only three classes (LOC, ORG, PER) of the formal run are used for out-of-domain evaluation. As there are not standard partitions of SONAR-1 and Ancora 2.0, the full corpus was used for training and later evaluated in-out-of-dom... | [
"what are the evaluation metrics?",
"which datasets were used in evaluation?",
"what are the baselines?"
] | [
[
"1701.09123-15-Table5-1.png"
],
[
"1701.09123-5-Table1-1.png"
],
[
"1701.09123-Contributions-7",
"1701.09123-Local Features-0"
]
] | [
"Precision, Recall, F1",
"CoNLL 2003, GermEval 2014, CoNLL 2002, Egunkaria, MUC7, Wikigold, MEANTIME, SONAR-1, Ancora 2.0",
"Perceptron model using the local features."
] | 452 |
2002.02427 | Irony Detection in a Multilingual Context | This paper proposes the first multilingual (French, English and Arabic) and multicultural (Indo-European languages vs. less culturally close languages) irony detection system. We employ both feature-based models and neural architectures using monolingual word representation. We compare the performance of these systems ... | {
"paragraphs": [
[
"Figurative language makes use of figures of speech to convey non-literal meaning BIBREF0, BIBREF1. It encompasses a variety of phenomena, including metaphor, humor, and irony. We focus here on irony and uses it as an umbrella term that covers satire, parody and sarcasm.",
"Irony d... | {
"answers": [
{
"annotation_id": [
"d5c6be3c5eb7c1ad1fae65b8efc3bcf9689a35a5"
],
"answer": [
{
"evidence": [
"We use the previous CNN architecture with bilingual embedding and the RF model with surface features (e.g., use of personal pronoun, presence of in... | {
"caption": [
"Table 1. Tweet distribution in all corpora.",
"Table 2. Results of the monolingual experiments (in percentage) in terms of accuracy (A), precision (P), recall (R), and macro F-score (F).",
"Table 3. Results of the cross-lingual experiments."
],
"file": [
"3-Table1-1.png",
"4-Ta... | [
"What monolingual word representations are used?"
] | [
[
"2002.02427-Monolingual Irony Detection-1"
]
] | [
"AraVec for Arabic, FastText for French, and Word2vec Google News for English."
] | 453 |
1807.09671 | A Novel ILP Framework for Summarizing Content with High Lexical Variety | Summarizing content contributed by individuals can be challenging, because people make different lexical choices even when describing the same events. However, there remains a significant need to summarize such content. Examples include the student responses to post-class reflective questions, product reviews, and news... | {
"paragraphs": [
[
"Summarization is a promising technique for reducing information overload. It aims at converting long text documents to short, concise summaries conveying the essential content of the source documents BIBREF0 . Extractive methods focus on selecting important sentences from the source and... | {
"answers": [
{
"annotation_id": [
"8e9ae652ae395c711d3c51d85471832268731ff6"
],
"answer": [
{
"evidence": [
"In this work, we propose to augment the integer linear programming (ILP)-based summarization framework with a low-rank approximation of the co-occu... | {
"caption": [
"Table 1. Selected summarization data sets. Publicly available data sets are marked with an asterisk (*). The statistics involve the number of summarization tasks (Tasks), average number of documents per task (Docs/task), average word count per task (WC/task), average word count per sentence (WC/se... | [
"Do they build one model per topic or on all topics?",
"Do they quantitavely or qualitatively evalute the output of their low-rank approximation to verify the grouping of lexical items?"
] | [
[
"1807.09671-Extrinsic evaluation-9"
],
[
"1807.09671-Experiments-0",
"1807.09671-Intrinsic evaluation-6"
]
] | [
"One model per topic.",
"They evaluate quantitatively."
] | 455 |
1611.00514 | The Intelligent Voice 2016 Speaker Recognition System | This paper presents the Intelligent Voice (IV) system submitted to the NIST 2016 Speaker Recognition Evaluation (SRE). The primary emphasis of SRE this year was on developing speaker recognition technology which is robust for novel languages that are much more heterogeneous than those used in the current state-of-the-a... | {
"paragraphs": [
[
"Compared to previous years, the 2016 NIST speaker recognition evaluation (SRE) marked a major shift from English towards Austronesian and Chinese languages. The task like previous years is to perform speaker detection with the focus on telephone speech data recorded over a variety of ha... | {
"answers": [
{
"annotation_id": [
"c098177da67025e6442b6e3416945868669f695d"
],
"answer": [
{
"evidence": [
"Since the introduction of i-vectors in BIBREF0 , the speaker recognition community has seen a significant increase in recognition performance. i-Ve... | {
"caption": [
"Table 1. The description of the data used for training the speaker recognition system.",
"Fig. 1. The architecture of our DNN-HMM speech activity detection.",
"Fig. 2. The duration of test segments in the development set after dropping non-speech frames.",
"Fig. 3. Partial excerpt of 1... | [
"How well does their system perform on the development set of SRE?"
] | [
[
"1611.00514-5-Table2-1.png",
"1611.00514-Results and Discussion-0"
]
] | [
"EER 16.04, Cmindet 0.6012, Cdet 0.6107"
] | 458 |
1901.00570 | Event detection in Twitter: A keyword volume approach | Event detection using social media streams needs a set of informative features with strong signals that need minimal preprocessing and are highly associated with events of interest. Identifying these informative features as keywords from Twitter is challenging, as people use informal language to express their thoughts ... | {
"paragraphs": [
[
"Event detection is important for emergency services to react rapidly and minimize damage. For example, terrorist attacks, protests, or bushfires may require the presence of ambulances, firefighters, and police as soon as possible to save people. This research aims to detect events as so... | {
"answers": [
{
"annotation_id": [
"fcade84668262402efadc9ee49bc327eaa737535"
],
"answer": [
{
"evidence": [
"The main contributions of this paper are (1) to overcome twitter challenges of acronyms, short text, ambiguity and synonyms, (2) to identify the se... | {
"caption": [
"Fig. I: The the proposed pipeline extracts the word-pairs matching events of interest, then use the extracted word-pairs as features to detect civil unrest events.",
"Fig. 2: The spikes in the time series signal for the word-pair ('Melbourne', 'Ral') are matched with the event days that repres... | [
"Which of the classifiers showed the best performance?",
"How are the keywords associated with events such as protests selected?"
] | [
[
"1901.00570-7-TableII-1.png"
],
[
"1901.00570-Introduction-5",
"1901.00570-Introduction-4"
]
] | [
"Logistic regression",
"By using a Bayesian approach and by using word-pairs, where they extract all the pairs of co-occurring words within each tweet. They search for the words that achieve the highest number of spikes matching the days of events."
] | 468 |
2002.10832 | BERT Can See Out of the Box: On the Cross-modal Transferability of Text Representations | Pre-trained language models such as BERT have recently contributed to significant advances in Natural Language Processing tasks. Interestingly, while multilingual BERT models have demonstrated impressive results, recent works have shown how monolingual BERT can also be competitive in zero-shot cross-lingual settings. T... | {
"paragraphs": [
[
"The BERT language model BIBREF0 is a Deep Bidirectional Transformer BIBREF1 pre-trained on textual corpora (BookCorpus and Wikipedia) using a Masked Language Model (MLM) objective – predicting some words that are randomly masked in the sentence, along with a sentence entailment loss. Re... | {
"answers": [
{
"annotation_id": [
"c160258e03c07ab73b09b1897524021026c99d1c"
],
"answer": [
{
"evidence": [
"As we iteratively concatenate the generated tokens, the BERT bi-directional self-attention mechanism would impact, at every new token, the represen... | {
"caption": [
"Figure 1. Model overview. Captions are encoded via BERT embeddings, while visual embeddings (blue) are obtained via a linear layer, used to project image representations to the embedding layer dimensions.",
"Table 1. Quantitative VQG results on V QA1.0. We report results from previous works in... | [
"What is different in BERT-gen from standard BERT?",
"How are multimodal representations combined?"
] | [
[
"2002.10832-Model ::: BERT-gen: Text Generation with BERT ::: Attention Trick-0"
],
[
"2002.10832-Model ::: Representing an Image as Text-2"
]
] | [
"They use a left-to-right attention mask so that the input tokens can only attend to other input tokens, and the target tokens can only attend to the input tokens and already generated target tokens.",
"The image feature vectors are mapped into BERT embedding dimensions and treated like a text sequence afterwards... | 470 |
1709.02271 | Leveraging Discourse Information Effectively for Authorship Attribution | We explore techniques to maximize the effectiveness of discourse information in the task of authorship attribution. We present a novel method to embed discourse features in a Convolutional Neural Network text classifier, which achieves a state-of-the-art result by a substantial margin. We empirically investigate severa... | {
"paragraphs": [
[
"Authorship attribution (AA) is the task of identifying the author of a text, given a set of author-labeled training texts. This task typically makes use of stylometric cues at the surface lexical and syntactic level BIBREF0 , although BIBREF1 and BIBREF2 go beyond the sentence level, sh... | {
"answers": [
{
"annotation_id": [
"381f894273131354d32d5b877ecc961d1d8f07e8"
],
"answer": [
{
"evidence": [
"To further study the information encoded in the discourse embeddings, we perform t-SNE clustering BIBREF20 on them, using the best performing model... | {
"caption": [
"Table 2: The probability vector for the excerpt in Table 1 capturing transition probabilities of length 2.",
"Figure 1: RST tree for the first sentence of the excerpt in Table 1.",
"Table 3: The entity grid for the excerpt in Table 1, where columns are salient entities and rows are sentenc... | [
"How are discourse embeddings analyzed?",
"How are discourse features incorporated into the model?",
"What discourse features are used?"
] | [
[
"1709.02271-Analysis-4"
],
[
"1709.02271-Models-3"
],
[
"1709.02271-Models-3"
]
] | [
"They perform t-SNE clustering to analyze discourse embeddings",
"They derive entity grid with grammatical relations and RST discourse relations and concatenate them with pooling vector for the char-bigrams before feeding to the resulting vector to the softmax layer.",
"Entity grid with grammatical relations an... | 472 |
1807.08204 | Towards Neural Theorem Proving at Scale | Neural models combining representation learning and reasoning in an end-to-end trainable manner are receiving increasing interest. However, their use is severely limited by their computational complexity, which renders them unusable on real world datasets. We focus on the Neural Theorem Prover (NTP) model proposed by R... | {
"paragraphs": [
[
"Recent advancements in deep learning intensified the long-standing interests in integrating symbolic reasoning with connectionist models BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . The attraction of said integration stems from the complementing properties of these systems. Symbolic reasonin... | {
"answers": [
{
"annotation_id": [
"38422bb256daf719d511514a73c66a26a115e80c"
],
"answer": [
{
"evidence": [
"Unification Module. In backward chaining, unification between two atoms is used for checking whether they can represent the same structure. In disc... | {
"caption": [
"Figure 1. A visual depiction of the NTP’ recursive computation graph construction, applied to a toy KB (top left). Dash-separated rectangles denote proof states (left: substitutions, right: proof score -generating neural network). All the non-FAIL proof states are aggregated to obtain the final pr... | [
"What are proof paths?"
] | [
[
"1807.08204-2-Figure1-1.png"
]
] | [
"A sequence of logical statements represented in a computational graph"
] | 473 |
1704.08960 | Neural Word Segmentation with Rich Pretraining | Neural word segmentation research has benefited from large-scale raw texts by leveraging them for pretraining character and word embeddings. On the other hand, statistical segmentation research has exploited richer sources of external information, such as punctuation, automatic segmentation and POS. We investigate the ... | {
"paragraphs": [
[
"There has been a recent shift of research attention in the word segmentation literature from statistical methods to deep learning BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Neural network models have been exploited due to their strength in non-sparse representation lear... | {
"answers": [
{
"annotation_id": [
"5037fbd30dcf60b0602d223a6912d8e89b273b47"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": ... | {
"caption": [
"Table 1: A transition based word segmentation example.",
"Figure 2: Deduction system, where ⊕ denotes string concatenation.",
"Figure 1: Overall model.",
"Figure 3: Shared character representation.",
"Table 2: Hyper-parameter values.",
"Table 3: Statistics of external data.",
... | [
"What external sources are used?"
] | [
[
"1704.08960-Pretraining-1",
"1704.08960-6-Table3-1.png",
"1704.08960-Pretraining-0"
]
] | [
"Raw data from Gigaword, Automatically segmented text from Gigaword, Heterogenous training data from People's Daily, POS data from People's Daily"
] | 474 |
2002.05058 | Learning to Compare for Better Training and Evaluation of Open Domain Natural Language Generation Models | Automated evaluation of open domain natural language generation (NLG) models remains a challenge and widely used metrics such as BLEU and Perplexity can be misleading in some cases. In our paper, we propose to evaluate natural language generation models by learning to compare a pair of generated sentences by fine-tunin... | {
"paragraphs": [
[
"Recent advances in sequence-to-sequence learning architecture BIBREF0 and the transformer model BIBREF1 have raised increasing interest in natural language generation (NLG) tasks, including story generation BIBREF2, open-domain dialogue response generation BIBREF3 and abstractive summar... | {
"answers": [
{
"annotation_id": [
"38adedaca4171f5cdb062128586f8666bb16d56b"
],
"answer": [
{
"evidence": [
"The comparative evaluator is trained with maximum likelihood estimation (MLE) objective, as described in eq DISPLAY_FORM6",
"where $\\m... | {
"caption": [
"Figure 1: model architecture of the comparative evaluator, the context is concatenated with generated samples.",
"Table 1: Sample-level correlation between metrics and human judgments, with p-values shown in brackets.",
"Table 2: Model-level correlation between metrics and human judgments,... | [
"How much better peformance is achieved in human evaluation when model is trained considering proposed metric?"
] | [
[
"2002.05058-Experiments ::: Experimental Designs & Results ::: RQ1: Sample-Level Correlation-1",
"2002.05058-6-Table1-1.png",
"2002.05058-Experiments ::: Experimental Designs & Results ::: RQ2: Model-Level Correlation-1",
"2002.05058-6-Table2-1.png"
]
] | [
"Pearson correlation to human judgement - proposed vs next best metric\nSample level comparison:\n- Story generation: 0.387 vs 0.148\n- Dialogue: 0.472 vs 0.341\nModel level comparison:\n- Story generation: 0.631 vs 0.302\n- Dialogue: 0.783 vs 0.553"
] | 475 |
2002.06675 | Speech Corpus of Ainu Folklore and End-to-end Speech Recognition for Ainu Language | Ainu is an unwritten language that has been spoken by Ainu people who are one of the ethnic groups in Japan. It is recognized as critically endangered by UNESCO and archiving and documentation of its language heritage is of paramount importance. Although a considerable amount of voice recordings of Ainu folklore has be... | {
"paragraphs": [
[
"Automatic speech recognition (ASR) technology has been made a dramatic progress and is currently brought to a pratical levels of performance assisted by large speech corpora and the introduction of deep learning techniques. However, this is not the case for low-resource languages which ... | {
"answers": [
{
"annotation_id": [
"aebcd5a6e7d1e859e2ba71199ef736b4aca05345"
],
"answer": [
{
"evidence": [
"The results of multilingual training in which the modeling unit is syllables are presented in Table 5. All error rates are the weighted averages of... | {
"caption": [
"Table 1: Speaker-wise details of the corpus",
"Table 2: Text excerpted from the prose tale ‘The Boy Who Became Porosir God’ spoken by KM.",
"Figure 1: The attention model with CTC auxiliary task.",
"Table 3: Examples of four modeling units.",
"Figure 2: The architecture of the mult... | [
"How much transcribed data is available for for Ainu language?"
] | [
[
"2002.06675-2-Table1-1.png",
"2002.06675-Ainu Speech Corpus ::: Numbers of Speakers and Episodes-0"
]
] | [
"Transcribed data is available for duration of 38h 54m 38s for 8 speakers."
] | 478 |
1909.08041 | Revealing the Importance of Semantic Retrieval for Machine Reading at Scale | Machine Reading at Scale (MRS) is a challenging task in which a system is given an input query and is asked to produce a precise output by "reading" information from a large knowledge base. The task has gained popularity with its natural combination of information retrieval (IR) and machine comprehension (MC). Advancem... | {
"paragraphs": [
[
"Extracting external textual knowledge for machine comprehensive systems has long been an important yet challenging problem. Success requires not only precise retrieval of the relevant information sparsely restored in a large knowledge source but also a deep understanding of both the sel... | {
"answers": [
{
"annotation_id": [
"82e0f7de747f8e064b5c5ad48c31e5cb6c75942e"
],
"answer": [
{
"evidence": [
"We chose the best system based on the dev set, and used that for submitting private test predictions on both FEVER and HotpotQA .",
"As... | {
"caption": [
"Figure 1: System Overview: blue dotted arrows indicate the inference flow and the red solid arrows indicate the training flow. Grey rounded rectangles are neural modules with different functionality. The two retrieval modules were trained with all positive examples from annotated ground truth set ... | [
"What baseline approaches do they compare against?"
] | [
[
"1909.08041-Results on Benchmarks-2",
"1909.08041-5-Table1-1.png",
"1909.08041-Results on Benchmarks-1",
"1909.08041-Results on Benchmarks-0",
"1909.08041-5-Table2-1.png"
]
] | [
"HotspotQA: Yang, Ding, Muppet\nFever: Hanselowski, Yoneda, Nie"
] | 479 |
1907.00854 | Katecheo: A Portable and Modular System for Multi-Topic Question Answering | We introduce a modular system that can be deployed on any Kubernetes cluster for question answering via REST API. This system, called Katecheo, includes four configurable modules that collectively enable identification of questions, classification of those questions into topics, a search of knowledge base articles, and... | {
"paragraphs": [
[
"When people interact with chatbots, smart speakers or digital assistants (e.g., Siri), one of their primary modes of interaction is information retrieval BIBREF0 . Thus, those that build dialog systems often have to tackle the problem of question answering.",
"Developers could sup... | {
"answers": [
{
"annotation_id": [
"7f8619d9280a918743612bc1fbcef60ffeeb55e6"
],
"answer": [
{
"evidence": [
"We demonstrated the utility of Katecheo by deploying the system for question answering in two topics, Medical Sciences and Christianity. These topi... | {
"caption": [
"Figure 1: The overall processing flow in Katecheo. Q represents the input question text, the dashed lines represent a flag passed between modules indicating whether the next module should proceed with processing, and the cylinders represent various data inputs to the modules.",
"Figure 2: The ... | [
"how many domains did they experiment with?"
] | [
[
"1907.00854-Example Usage-0"
]
] | [
"2"
] | 481 |
1811.01734 | Transductive Learning with String Kernels for Cross-Domain Text Classification | For many text classification tasks, there is a major problem posed by the lack of labeled data in a target domain. Although classifiers for a target domain can be trained on labeled text data from a related source domain, the accuracy of such classifiers is usually lower in the cross-domain setting. Recently, string ke... | {
"paragraphs": [
[
"",
"Domain shift is a fundamental problem in machine learning, that has attracted a lot of attention in the natural language processing and vision communities BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . To understa... | {
"answers": [
{
"annotation_id": [
"991f8b4557b5094f4ecb286448c4aa53500d177a"
],
"answer": [
{
"evidence": [
"Data set. For the cross-domain polarity classification experiments, we use the second version of Multi-Domain Sentiment Dataset BIBREF0 . The data ... | {
"caption": [
"Table 1. Multi-source cross-domain polarity classification accuracy rates (in %) of our transductive approaches versus a state-of-the-art baseline based on string kernels [13], as well as SST [3] and KE-Meta [12]. The best accuracy rates are highlighted in bold. The marker * indicates that the per... | [
"How long is the dataset?",
"What is a string kernel?"
] | [
[
"1811.01734-Polarity Classification-1"
],
[
"1811.01734-String Kernels-0"
]
] | [
"8000",
"String kernel is a technique that uses character n-grams to measure the similarity of strings"
] | 482 |
1804.08782 | Towards an Unsupervised Entrainment Distance in Conversational Speech using Deep Neural Networks | Entrainment is a known adaptation mechanism that causes interaction participants to adapt or synchronize their acoustic characteristics. Understanding how interlocutors tend to adapt to each other's speaking style through entrainment involves measuring a range of acoustic features and comparing those via multiple signa... | {
"paragraphs": [
[
"Vocal entrainment is an established social adaptation mechanism. It can be loosely defined as one speaker's spontaneous adaptation to the speaking style of the other speaker. Entrainment is a fairly complex multifaceted process and closely associated with many other mechanisms such as c... | {
"answers": [
{
"annotation_id": [
"3aa161f671fc6091fa79afb961c94e0c78098c78"
],
"answer": [
{
"evidence": [
"We use two datasets in this work: the training is done on the Fisher Corpus English Part 1 (LDC2004S13) BIBREF15 and testing on the Suicide Risk As... | {
"caption": [
"Figure 1: An overview of unsupervised training of the model",
"Table 1: Results of Experiment 1: classification accuracy (%) of real vs. fake sessions (averaged over 30 runs; standard deviation shown in parentheses)",
"Figure 2: t-SNE plot of difference vector of encoded turn-level embeddi... | [
"How do they correlate NED with emotional bond levels?"
] | [
[
"1804.08782-Experiment 2: Correlation with Emotional Bond-0"
]
] | [
"They compute Pearson’s correlation between NED measure for patient-to-therapist and patient-perceived emotional bond rating and NED measure for therapist-to-patient and patient-perceived emotional bond rating"
] | 483 |
1909.09270 | Named Entity Recognition with Partially Annotated Training Data | Supervised machine learning assumes the availability of fully-labeled data, but in many cases, such as low-resource languages, the only data available is partially annotated. We study the problem of Named Entity Recognition (NER) with partially annotated training data in which a fraction of the named entities are label... | {
"paragraphs": [
[
"Most modern approaches to NLP tasks rely on supervised learning algorithms to learn and generalize from labeled training data. While this has proven successful in high-resource scenarios, this is not realistic in many cases, such as low-resource languages, as the required amount of trai... | {
"answers": [
{
"annotation_id": [
"3b4d2e3967c74f3e895d0db0bd637ae20798486e"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 5: Bengali manual annotation results. Our methods improve on state of the art scores by over 5 points F1 given a relatively s... | {
"caption": [
"Figure 1: This example has three entities: Arsenal, Unai Emery, and Arsene Wenger. In the Partial row, the situation addressed in this paper, only the first and last are tagged, and all other tokens are assumed to be non-entities, making Unai Emery a false negative as compared to Gold. Our model i... | [
"What was their F1 score on the Bengali NER corpus?"
] | [
[
"1909.09270-9-Table5-1.png"
]
] | [
"52.0%"
] | 484 |
1806.04524 | Learning to Automatically Generate Fill-In-The-Blank Quizzes | In this paper we formalize the problem automatic fill-in-the-blank question generation using two standard NLP machine learning schemes, proposing concrete deep learning models for each. We present an empirical study based on data obtained from a language learning platform showing that both of our proposed settings offe... | {
"paragraphs": [
[
"With the advent of the Web 2.0, regular users were able to share, remix and distribute content very easily. As a result of this process, the Web became a rich interconnected set of heterogeneous data sources. Being in a standard format, it is suitable for many tasks involving knowledge ... | {
"answers": [
{
"annotation_id": [
"b1aa09cabb48baccf1c48b3cc6175de1f4d88cac"
],
"answer": [
{
"evidence": [
"Using our platform, we extracted anonymized user interaction data in the manner of real quizzes generated for a collection of several input video s... | {
"caption": [
"Figure 1: Our sequence labeling model based on an LSTM for AQG.",
"Figure 2: Our sequence classification model, based on an LSTM for AQG.",
"Table 1: Results of the seq. labeling approach."
],
"file": [
"3-Figure1-1.png",
"3-Figure2-1.png",
"4-Table1-1.png"
]
} | [
"What is the size of the dataset?"
] | [
[
"1806.04524-Empirical Study-3"
]
] | [
"300,000 sentences with 1.5 million single-quiz questions"
] | 488 |
1612.06897 | Fast Domain Adaptation for Neural Machine Translation | Neural Machine Translation (NMT) is a new approach for automatic translation of text from one human language into another. The basic concept in NMT is to train a large Neural Network that maximizes the translation performance on a given parallel corpus. NMT is gaining popularity in the research community because it out... | {
"paragraphs": [
[
"Due to the fact that Neural Machine Translation (NMT) is reaching comparable or even better performance compared to the traditional statistical machine translation (SMT) models BIBREF0 , BIBREF1 , it has become very popular in the recent years BIBREF2 , BIBREF3 , BIBREF4 . With the grea... | {
"answers": [
{
"annotation_id": [
"3d804c3b0bb79d93ec4c38932f00a50f22b8a389"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 1: German→English: Learning curve of the continue training. Scores are given in (TERBLEU)/2 (lower is better). tst2013 is ou... | {
"caption": [
"Table 1: German→English corpus statistics for in-domain (IWSLT 2015) and out-of-domain (WMT 2015) parallel training data.",
"Table 2: Adaptation results for the German→English translation task: tst2013 is the in-domain test set and newstest2014 is the out-of-domain test set. The combined data ... | [
"How many examples do they have in the target domain?"
] | [
[
"1612.06897-6-Figure1-1.png"
]
] | [
"Around 388k examples, 194k from tst2013 (in-domain) and 194k from newstest2014 (out-of-domain)"
] | 489 |
1912.06813 | Voice Transformer Network: Sequence-to-Sequence Voice Conversion Using Transformer with Text-to-Speech Pretraining | We introduce a novel sequence-to-sequence (seq2seq) voice conversion (VC) model based on the Transformer architecture with text-to-speech (TTS) pretraining. Seq2seq VC models are attractive owing to their ability to convert prosody. While seq2seq models based on recurrent neural networks (RNNs) and convolutional neural... | {
"paragraphs": [
[
"Voice conversion (VC) aims to convert the speech from a source to that of a target without changing the linguistic content BIBREF0. Conventional VC systems follow an analysis—conversion —synthesis paradigm BIBREF1. First, a high quality vocoder such as WORLD BIBREF2 or STRAIGHT BIBREF3 ... | {
"answers": [
{
"annotation_id": [
"c1dd83c15f13f8cd920270da88880515d1be3d34"
],
"answer": [
{
"evidence": [
"We conducted our experiments on the CMU ARCTIC database BIBREF33, which contains parallel recordings of professional US English speakers sampled at... | {
"caption": [
"Figure 1: Model architecture of Transformer-TTS and VC.",
"Figure 2: Illustration of proposed TTS pretraining technique for VC.",
"Table 1: Validation-set objective evaluation results of adapted TTS, baseline (ATTS2S), and variants of the VTN trained on different sizes of data.",
"Tabl... | [
"What is the baseline model?"
] | [
[
"1912.06813-Experimental evaluation ::: Comparison with baseline method-0"
]
] | [
"a RNN-based seq2seq VC model called ATTS2S based on the Tacotron model"
] | 493 |
1903.00172 | Open Information Extraction from Question-Answer Pairs | Open Information Extraction (OpenIE) extracts meaningful structured tuples from free-form text. Most previous work on OpenIE considers extracting data from one sentence at a time. We describe NeurON, a system for extracting tuples from question-answer pairs. Since real questions and answers often contain precisely the ... | {
"paragraphs": [
[
"This document has been adapted from the instructions for earlier ACL and NAACL proceedings, including those for ACL 2018 by Shay Cohen, Kevin Gimpel, and Wei Lu, NAACL 2018 by Margaret Michell and Stephanie Lukin, 2017/2018 (NA)ACL bibtex suggestions from Jason Eisner, ACL 2017 by Dan G... | {
"answers": [
{
"annotation_id": [
"7604914dc35858e65a74ebeee5787460f37cad9b"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"evaluate"
],
"unanswe... | {
"caption": [
"Figure 1: Multi-Encoder, Constrained-Decoder model for tuple extraction from (q, a).",
"Figure 2: State diagram for tag masking rules. V is the vocabulary including placeholder tags, T is the set of placeholder tags.",
"Table 1: Various types of training instances.",
"Table 3: Precisio... | [
"Where did they get training data?",
"What extraction model did they use?",
"Which datasets did they experiment on?"
] | [
[
"1903.00172-6-Table3-1.png",
"1903.00172-7-Table4-1.png",
"1903.00172-5-Table1-1.png"
],
[
"1903.00172-3-Figure1-1.png"
],
[
"1903.00172-5-Table1-1.png"
]
] | [
"AmazonQA and ConciergeQA datasets",
"Multi-Encoder, Constrained-Decoder model",
"ConciergeQA and AmazonQA"
] | 495 |
1908.02402 | Flexibly-Structured Model for Task-Oriented Dialogues | This paper proposes a novel end-to-end architecture for task-oriented dialogue systems. It is based on a simple and practical yet very effective sequence-to-sequence approach, where language understanding and state tracking tasks are modeled jointly with a structured copy-augmented sequential decoder and a multi-label ... | {
"paragraphs": [
[
"A traditional task-oriented dialogue system is often composed of a few modules, such as natural language understanding, dialogue state tracking, knowledge base (KB) query, dialogue policy engine and response generation. Language understanding aims to convert the input to some predefined... | {
"answers": [
{
"annotation_id": [
"43b9fb24d884aaab3d7f1d0e58d0d3d2c38fbc2e"
],
"answer": [
{
"evidence": [
"This paper proposes the Flexibly-Structured Dialogue Model (FSDM) as a new end-to-end task-oriented dialogue system. The state tracking component o... | {
"caption": [
"Figure 1: FSDM architecture illustrated by a dialogue turn from the Cambridge Restaurant dataset with the following components: an input encoder (green), a belief state tracker (yellow for the informable slot values, orange for the requestable slots), a KB query component (purple), a response slot... | [
"How do slot binary classifiers improve performance?",
"What baselines have been used in this work?"
] | [
[
"1908.02402-Introduction-3"
],
[
"1908.02402-Benchmarks-2",
"1908.02402-Benchmarks-0",
"1908.02402-Benchmarks-4",
"1908.02402-Benchmarks-1",
"1908.02402-Benchmarks-3"
]
] | [
"by adding extra supervision to generate the slots that will be present in the response",
"NDM, LIDM, KVRN, and TSCP/RL"
] | 496 |
1601.02543 | Evaluating the Performance of a Speech Recognition based System | Speech based solutions have taken center stage with growth in the services industry where there is a need to cater to a very large number of people from all strata of the society. While natural language speech interfaces are the talk in the research community, yet in practice, menu based speech solutions thrive. Typica... | {
"paragraphs": [
[
"There are several commercial menu based ASR systems available around the world for a significant number of languages and interestingly speech solution based on these ASR are being used with good success in the Western part of the globe BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . Typically, ... | {
"answers": [
{
"annotation_id": [
"40e8b561276384d47c4e6a0d3861d502b1fe37f3"
],
"answer": [
{
"evidence": [
"In this paper we proposed a methodology to identify words that could lead to confusion at any given node of a speech recognition based system. We u... | {
"caption": [
"Fig. 1. Schematic of a typical menu based ASR system (Wn is spoken word).",
"Fig. 2. A typical speech recognition system. In a menu based system the language model is typically the set of words that need to be recognized at a given node.",
"Fig. 3. Call flow of Indian Railway Inquiry Syste... | [
"what bottlenecks were identified?"
] | [
[
"1601.02543-Evaluation without Testing-2",
"1601.02543-Conclusion-0"
]
] | [
"Confusion in recognizing the words that are active at a given node by a speech recognition solution developed for Indian Railway Inquiry System."
] | 499 |
1811.09786 | Recurrently Controlled Recurrent Networks | Recurrent neural networks (RNNs) such as long short-term memory and gated recurrent units are pivotal building blocks across a broad spectrum of sequence modeling problems. This paper proposes a recurrently controlled recurrent network (RCRN) for expressive and powerful sequence encoding. More concretely, the key idea ... | {
"paragraphs": [
[
"Recurrent neural networks (RNNs) live at the heart of many sequence modeling problems. In particular, the incorporation of gated additive recurrent connections is extremely powerful, leading to the pervasive adoption of models such as Gated Recurrent Units (GRU) BIBREF0 or Long Short-Te... | {
"answers": [
{
"annotation_id": [
"418f10bc1c557b7218dd99a90693696afa6b9c7d"
],
"answer": [
{
"evidence": [
"On the 16 review datasets (Table TABREF22 ) from BIBREF32 , BIBREF31 , our proposed RCRN architecture achieves the highest score on all 16 datasets... | {
"caption": [
"Figure 1: High level overview of our proposed RCRN architecture.",
"Table 1: Results on the Amazon Reviews dataset. † are models implemented by us.",
"Table 4: Results on IMDb binary sentiment clasification.",
"Table 6: Results on SNLI dataset.",
"Table 10: Training and Inference t... | [
"By how much do they outperform BiLSTMs in Sentiment Analysis?"
] | [
[
"1811.09786-Overall Results-1"
]
] | [
"Proposed RCRN outperforms ablative baselines BiLSTM by +2.9% and 3L-BiLSTM by +1.1% on average across 16 datasets."
] | 505 |
1704.05907 | End-to-End Multi-View Networks for Text Classification | We propose a multi-view network for text classification. Our method automatically creates various views of its input text, each taking the form of soft attention weights that distribute the classifier's focus among a set of base features. For a bag-of-words representation, each view focuses on a different subset of the... | {
"paragraphs": [
[
"State-of-the-art deep neural networks leverage task-specific architectures to develop hierarchical representations of their input, with each layer building a refined abstraction of the layer that came before it BIBREF0 . For text classification, one can think of this as a single reader ... | {
"answers": [
{
"annotation_id": [
"d19713335548014b47fb0ad5fb6ef1211aa18b21"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Accuracies on the Stanford Sentiment Treebank 5-class classification task; except for the MVN, all results are drawn from ... | {
"caption": [
"Figure 1: A MVN architecture with four views.",
"Table 1: Accuracies on the Stanford Sentiment Treebank 5-class classification task; except for the MVN, all results are drawn from (Lei et al., 2015).",
"Figure 2: Accuracies obtained by varying the number of views.",
"Table 3: Error rat... | [
"what state of the accuracy did they obtain?",
"what models did they compare to?",
"which benchmark tasks did they experiment on?"
] | [
[
"1704.05907-3-Table1-1.png"
],
[
"1704.05907-Stanford Sentiment Treebank-2"
],
[
"1704.05907-Introduction-3"
]
] | [
"51.5",
"High-order CNN, Tree-LSTM, DRNN, DCNN, CNN-MC, NBoW and SVM ",
" They used Stanford Sentiment Treebank benchmark for sentiment classification task and AG English news corpus for the text classification task."
] | 506 |
2001.08051 | TLT-school: a Corpus of Non Native Children Speech | This paper describes "TLT-school" a corpus of speech utterances collected in schools of northern Italy for assessing the performance of students learning both English and German. The corpus was recorded in the years 2017 and 2018 from students aged between nine and sixteen years, attending primary, middle and high scho... | {
"paragraphs": [
[
"We have acquired large sets of both written and spoken data during the implementation of campaigns aimed at assessing the proficiency, at school, of Italian pupils learning both German and English. Part of the acquired data has been included in a corpus, named \"Trentino Language Testin... | {
"answers": [
{
"annotation_id": [
"abd7e0e7e34db65f0bdc4df7633ae463ec8c8752"
],
"answer": [
{
"evidence": [
"It is worth mentioning that the collected texts contain a large quantity of errors of several types: orthographic, syntactic, code-switched words (... | {
"caption": [
"Table 1: Evaluation of L2 linguistic competences in Trentino: level, grade, age and number of pupils participating in the evaluation campaigns. Most of the pupils did both the English and the German tests.",
"Table 2: Written data collected during different evaluation campaigns. Column “#Q” in... | [
"How is the proficiency score calculated?",
"What proficiency indicators are used to the score the utterances?",
"What accuracy is achieved by the speech recognition system?",
"How is the speech recognition system evaluated?",
"How many of the utterances are transcribed?",
"How many utterances are in the ... | [
[
"2001.08051-Data Acquisition-1",
"2001.08051-3-Table4-1.png",
"2001.08051-Data Acquisition-2"
],
[
"2001.08051-Data Acquisition-1",
"2001.08051-3-Table4-1.png"
],
[
"2001.08051-Usage of the Data ::: ASR-related Challenges-1",
"2001.08051-5-Table8-1.png"
],
[
"2001.08051... | [
"They used 6 indicators for proficiency (same for written and spoken) each marked by bad, medium or good by one expert.",
"6 indicators:\n- lexical richness\n- pronunciation and fluency\n- syntactical correctness\n- fulfillment of delivery\n- coherence and cohesion\n- communicative, descriptive, narrative skills"... | 513 |
1611.03382 | Efficient Summarization with Read-Again and Copy Mechanism | Encoder-decoder models have been widely used to solve sequence to sequence prediction tasks. However current approaches suffer from two shortcomings. First, the encoders compute a representation of each word taking into account only the history of the words it has read so far, yielding suboptimal representations. Secon... | {
"paragraphs": [
[
"Encoder-decoder models have been widely used in sequence to sequence tasks such as machine translation ( BIBREF0 , BIBREF1 ). They consist of an encoder which represents the whole input sequence with a single feature vector. The decoder then takes this representation and generates the d... | {
"answers": [
{
"annotation_id": [
"58cd181cc6341f7e303e2d75cbda1c0bea6e2eec"
],
"answer": [
{
"evidence": [
"Evaluation on DUC2004: DUC 2004 ( BIBREF15 ) is a commonly used benchmark on summarization task consisting of 500 news articles. Each article is pa... | {
"caption": [
"Figure 1: Read-Again Summarization Model",
"Figure 2: Read-Again Model",
"Figure 3: Hierachical Read-Again",
"Table 1: Different Read-Again Model. Ours denotes Read-Again models. C denotes copy mechanism. Ours-Opt-1 and Ours-Opt-2 are the models described in section 3.1.3. Size denotes... | [
"By how much does their model outperform both the state-of-the-art systems?",
"What is the state-of-the art?"
] | [
[
"1611.03382-Quantitative Evaluation-1",
"1611.03382-8-Table2-1.png"
],
[
"1611.03382-Summarization-2"
]
] | [
"w.r.t. ROUGE-1 their model outperforms by 0.98% and w.r.t. ROUGE-L their model outperforms by 0.45%",
"neural attention model with a convolutional encoder with an RNN decoder and RNN encoder-decoder"
] | 514 |
1909.06937 | CM-Net: A Novel Collaborative Memory Network for Spoken Language Understanding | Spoken Language Understanding (SLU) mainly involves two tasks, intent detection and slot filling, which are generally modeled jointly in existing works. However, most existing models fail to fully utilize co-occurrence relations between slots and intents, which restricts their potential performance. To address this iss... | {
"paragraphs": [
[
"Spoken Language Understanding (SLU) is a core component in dialogue systems. It typically aims to identify the intent and semantic constituents for a given utterance, which are referred as intent detection and slot filling, respectively. Past years have witnessed rapid developments in d... | {
"answers": [
{
"annotation_id": [
"4420a87163b36ed7a76ec7e62953a92d0b55e147"
],
"answer": [
{
"evidence": [
"We collect utterances from the $\\mathbf {C}$hinese $\\mathbf {A}$rtificial $\\mathbf {I}$ntelligence $\\mathbf {S}$peakers (CAIS), and annotate th... | {
"caption": [
"Figure 1: Statistical association of slot tags (on the left) and intent labels (on the right) in the SNIPS, where colors indicate different intents and thicknesses of lines indicate proportions.",
"Table 1: Examples in SNIPS with annotations of intent label for the utterance and slot tags for ... | [
"What was the performance on the self-collected corpus?",
"What is the size of their dataset?"
] | [
[
"1909.06937-8-Table6-1.png"
],
[
"1909.06937-5-Table2-1.png"
]
] | [
"F1 scores of 86.16 on slot filling and 94.56 on intent detection",
"10,001 utterances"
] | 516 |
1905.10044 | BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions | In this paper we study yes/no questions that are naturally occurring --- meaning that they are generated in unprompted and unconstrained settings. We build a reading comprehension dataset, BoolQ, of such questions, and show that they are unexpectedly challenging. They often query for complex, non-factoid information, a... | {
"paragraphs": [
[
"Understanding what facts can be inferred to be true or false from text is an essential part of natural language understanding. In many cases, these inferences can go well beyond what is immediately stated in the text. For example, a simple sentence like “Hanna Huyskova won the gold meda... | {
"answers": [
{
"annotation_id": [
"5e90675cae548d25dc1b0665ced21ff19a8da6e4"
],
"answer": [
{
"evidence": [
"Our Recurrent +ELMo model uses the language model from BIBREF9 to provide contextualized embeddings to the baseline model outlined above, as recomm... | {
"caption": [
"Figure 1: Example yes/no questions from the BoolQ dataset. Each example consists of a question (Q), an excerpt from a passage (P), and an answer (A) with an explanation added for clarity.",
"Table 1: Question categorization of BoolQ. Question topics are shown in the top half and question types... | [
"how was the dataset built?"
] | [
[
"1905.10044-Data Collection-2",
"1905.10044-Data Collection-1",
"1905.10044-Data Collection-3"
]
] | [
"Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Que... | 519 |
1708.04557 | Database of Parliamentary Speeches in Ireland, 1919-2013 | We present a database of parliamentary debates that contains the complete record of parliamentary speeches from D\'ail \'Eireann, the lower house and principal chamber of the Irish parliament, from 1919 to 2013. In addition, the database contains background information on all TDs (Teachta D\'ala, members of parliament)... | {
"paragraphs": [
[
"Almost all political decisions and political opinions are, in one way or another, expressed in written or spoken texts. Great leaders in history become famous for their ability to motivate the masses with their speeches; parties publish policy programmes before elections in order to pro... | {
"answers": [
{
"annotation_id": [
"45153f9f1b03de7a91909986aba397780858f75c"
],
"answer": [
{
"evidence": [
"To estimate speakers' position we use Wordscore BIBREF1 – a version of the Naive Bayes classifier that is deployed for text categorization problems... | {
"caption": [
"Fig. 1. Finance ministers’ policy positions as estimated from all budget speeches (1922–2009) with an overlaid linear regression line.",
"Fig. 2. The Irish economy over time: Inflation (1923–2008), Per Capita GDP growth (annual %; 1961–2008) and unemployment rate (1956–2008).",
"Fig. 4. Es... | [
"what processing was done on the speeches before being parsed?"
] | [
[
"1708.04557-Speakers' Policy Position in the 2008 Budget Debate-1"
]
] | [
"Remove numbers and interjections"
] | 520 |
2003.12932 | User Generated Data: Achilles' heel of BERT | Pre-trained language models such as BERT are known to perform exceedingly well on various NLP tasks and have even established new State-Of-The-Art (SOTA) benchmarks for many of these tasks. Owing to its success on various tasks and benchmark datasets, industry practitioners have started to explore BERT to build applica... | {
"paragraphs": [
[
"In recent times, pre-trained contextual language models have led to significant improvement in the performance for many NLP tasks. Among the family of these models, the most popular one is BERT BIBREF0, which is also the focus of this work. The strength of the BERT model FIGREF2 stems f... | {
"answers": [
{
"annotation_id": [
"4586a85a0c6b7494ed0d33cf42a776425ebf3db4"
],
"answer": [
{
"evidence": [
"Figure FIGREF12stsa and FIGREF12stsb shows Pearson-Spearman correlation vs % of error for textual semantic similarity on STS-B dataset after fine t... | {
"caption": [
"Figure 1: BERT architecture [1]",
"Figure 2: The Transformer model architecture [2]",
"Table 1: Number of utterances in each datasets",
"Figure 3: F1 score vs % of error for Sentiment analysis on IMDB dataset",
"Figure 4: F1 score vs % of error for Sentiment analysis on SST-2 data"... | [
"What is the performance change of the textual semantic similarity task when no error and maximum errors (noise) are present?",
"Which sentiment analysis data set has a larger performance drop when a 10% error is introduced?"
] | [
[
"2003.12932-Results-2",
"2003.12932-5-Figure5-1.png"
],
[
"2003.12932-Results-1",
"2003.12932-4-Figure3-1.png",
"2003.12932-Results-0"
]
] | [
"10 Epochs: Pearson-Spearman correlation drops 60 points when error increases by 20%\n50 Epochs: Pearson-Spearman correlation drops 55 points when error increases by 20%",
"SST-2 dataset"
] | 522 |
2002.08307 | Compressing BERT: Studying the Effects of Weight Pruning on Transfer Learning | Universal feature extractors, such as BERT for natural language processing and VGG for computer vision, have become effective methods for improving deep learning models without requiring more labeled data. A common paradigm is to pre-train a feature extractor on large amounts of data then fine-tune it as part of a deep... | {
"paragraphs": [
[
"Pre-trained feature extractors, such as BERT BIBREF0 for natural language processing and VGG BIBREF1 for computer vision, have become effective methods for improving the performance of deep learning models. In the last year, models similar to BERT have become state-of-the-art in many NL... | {
"answers": [
{
"annotation_id": [
"458ca650f9fcfbab21f6524e5cd7cceb82103d7c"
],
"answer": [
{
"evidence": [
"Model compression BIBREF7, which attempts to shrink a model without losing accuracy, is a viable approach to decreasing GPU usage. It might also be... | {
"caption": [
"Figure 1: (Blue) The best GLUE dev accuracy and training losses for models pruned during pretraining, averaged over 5 tasks. Also shown are models with information deletion during pre-training (orange), models pruned after downstream fine-tuning (green), and models pruned randomly during pre-train... | [
"How much is pre-training loss increased in Low/Medium/Hard level of pruning?"
] | [
[
"2002.08307-Pruning Regimes ::: How Much Is A Bit Of BERT Worth?-0",
"2002.08307-5-Figure2-1.png"
]
] | [
"The increase is roughly linear: on average 2.0 for low, around 3.5 for medium, and 6.0 for hard pruning"
] | 523 |
1707.08559 | Video Highlight Prediction Using Audience Chat Reactions | Sports channel video portals offer an exciting domain for research on multimodal, multilingual analysis. We present methods addressing the problem of automatic video highlight prediction based on joint visual features and textual analysis of the real-world audience discourse with complex slang, in both English and trad... | {
"paragraphs": [
[
"On-line eSports events provide a new setting for observing large-scale social interaction focused on a visual story that evolves over time—a video game. While watching sporting competitions has been a major source of entertainment for millennia, and is a significant part of today's cult... | {
"answers": [
{
"annotation_id": [
"66530989d292be1a7585169dd36fcae82e7cd385"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": ... | {
"caption": [
"Figure 1: Pictures of Broadcasting platforms:(a) Twitch: League of Legends Tournament Broadcasting, (b) Youtube: News Channel, (c)Facebook: Personal live sharing",
"Figure 2: Highlight Labeling: (a) The feature representation of each frame is calculated by averaging each color channel in each ... | [
"What is the average length of the recordings?",
"What were their results?"
] | [
[
"1707.08559-Data Collection-1"
],
[
"1707.08559-5-Table3-1.png"
]
] | [
"40 minutes",
"Best model achieved F-score 74.7 on NALCS and F-score of 70.0 on LMS on test set"
] | 525 |
1912.10806 | DP-LSTM: Differential Privacy-inspired LSTM for Stock Prediction Using Financial News | Stock price prediction is important for value investments in the stock market. In particular, short-term prediction that exploits financial news articles is promising in recent years. In this paper, we propose a novel deep neural network DP-LSTM for stock price prediction, which incorporates the news articles as hidden... | {
"paragraphs": [
[
"Stock prediction is crucial for quantitative analysts and investment companies. Stocks' trends, however, are affected by a lot of factors such as interest rates, inflation rates and financial news [12]. To predict stock prices accurately, one must use these variable information. In part... | {
"answers": [
{
"annotation_id": [
"ec5a4453d309c8ff8acdb8ce53c33fc8450e31d3"
],
"answer": [
{
"evidence": [
"Figure FIGREF29 shows the $\\text{MPAs}$ of the proposed DP-LSTM and vanilla LSTM for comparison. In Table TABREF30, we give the mean MPA results f... | {
"caption": [
"Figure 1: NLTK processing. For preprocessing, each news title will be tokenized into individual words. Then applying SentimentIntensityAnalyzer from NLTK vadar to calculate the polarity score.",
"Figure 2: Positive wordcloud (left) and negative wordcloud (right). We divide the news based on th... | [
"What is the prediction accuracy of the model?",
"What is the dataset used in the paper?"
] | [
[
"1912.10806-7-Table1-1.png",
"1912.10806-8-Table2-1.png"
],
[
"1912.10806-Training DP-LSTM Neural Network ::: Data Preprocessing and Normalization ::: Data Preprocessing-0"
]
] | [
"mean prediction accuracy 0.99582651\nS&P 500 Accuracy 0.99582651",
"historical S&P 500 component stocks\n 306242 news articles"
] | 528 |
1904.09708 | Compositional generalization in a deep seq2seq model by separating syntax and semantics | Standard methods in deep learning for natural language processing fail to capture the compositional structure of human language that allows for systematic generalization outside of the training distribution. However, human learners readily generalize in this way, e.g. by applying known grammatical rules to novel words.... | {
"paragraphs": [
[
"A crucial property underlying the expressive power of human language is its systematicity BIBREF0 , BIBREF1 : syntactic or grammatical rules allow arbitrary elements to be combined in novel ways, making the number of sentences possible in a language to be exponential in the number of it... | {
"answers": [
{
"annotation_id": [
"46604f501a5888f4a17f691db884d1b3bb1f8674"
],
"answer": [
{
"evidence": [
"A recently published dataset called SCAN BIBREF2 (Simplified version of the CommAI Navigation tasks), tests compositional generalization in a seque... | {
"caption": [
"Figure 1: Simplified illustration of out-of-domain (o.o.d.) extrapolation required by SCAN compositional generalization task. Shapes represent the distribution of all possible command sequences. In a simple split, train and test data are independent and identically distributed (i.i.d.), but in the... | [
"How does the SCAN dataset evaluate compositional generalization?"
] | [
[
"1904.09708-Introduction-1"
]
] | [
"it systematically holds out inputs in the training set containing basic primitive verb, \"jump\", and tests on sequences containing that verb."
] | 529 |
1902.09393 | Cooperative Learning of Disjoint Syntax and Semantics | There has been considerable attention devoted to models that learn to jointly infer an expression's syntactic structure and its semantics. Yet, Nangia and Bowman (2018) has recently shown that the current best systems fail to learn the correct parsing strategy on mathematical expressions generated from a simple context... | {
"paragraphs": [
[
"This document has been adapted from the instructions for earlier ACL and NAACL proceedings, including those for ACL 2018 by Shay Cohen, Kevin Gimpel, and Wei Lu, NAACL 2018 by Margaret Michell and Stephanie Lukin, 2017/2018 (NA)ACL bibtex suggestions from Jason Eisner, ACL 2017 by Dan G... | {
"answers": [
{
"annotation_id": [
"467cf1dcbdef566076d9bc7a2a7d97c3f35e8706"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Accuracy on the ListOps dataset. All models have 128 dimensions. Results for models with * are taken from Nangia and Bowma... | {
"caption": [
"Table 1: Accuracy on the ListOps dataset. All models have 128 dimensions. Results for models with * are taken from Nangia and Bowman (2018).",
"Figure 1: Blue crosses depict an average accuracy of five models on the test examples that have lengths within certain range. Black circles illustrate... | [
"How much does this system outperform prior work?",
"What are the baseline systems that are compared against?"
] | [
[
"1902.09393-5-Table1-1.png"
],
[
"1902.09393-5-Table1-1.png"
]
] | [
"The system outperforms the LSTM model by 27.7%, the RL-SPINN model by 38.5% and the Gumbel Tree-LSTM by 41.6%",
"The system is compared to baseline models: LSTM, RL-SPINN and Gumbel Tree-LSTM"
] | 531 |
1909.13695 | Non-native Speaker Verification for Spoken Language Assessment | Automatic spoken language assessment systems are becoming more popular in order to handle increasing interests in second language learning. One challenge for these systems is to detect malpractice. Malpractice can take a range of forms, this paper focuses on detecting when a candidate attempts to impersonate another in... | {
"paragraphs": [
[
"Automatic spoken assessment systems are becoming increasingly popular, especially for English with the high demand around the world for learning of English as a second language BIBREF0, BIBREF1, BIBREF2, BIBREF3. In addition to assessing a candidate's English ability such as fluency and... | {
"answers": [
{
"annotation_id": [
"e194de11e39aa308d74523cbeb863f6fef17f5da"
],
"answer": [
{
"evidence": [
"The Business Language Testing Service (BULATS) test of Cambridge Assessment English BIBREF27 is a multi-level computer-based English test. It consi... | {
"caption": [
"Fig. 1. Diagram of an automatic spoken language assessment system.",
"Table 2. % EER performance of VoxCeleb-based systems on BULATS and Linguaskill test sets.",
"Table 1. % EER performance of BULATS-trained baseline systems on BULATS and Linguaskill test sets.",
"Fig. 2. DET curves of... | [
"What systems are tested?"
] | [
[
"1909.13695-4-Table1-1.png",
"1909.13695-Experimental results ::: Baseline system performance-2",
"1909.13695-Experimental results ::: Baseline system performance-3",
"1909.13695-4-Table2-1.png"
]
] | [
"BULATS i-vector/PLDA\nBULATS x-vector/PLDA\nVoxCeleb x-vector/PLDA\nPLDA adaptation (X1)\n Extractor fine-tuning (X2) "
] | 532 |
1601.01705 | Learning to Compose Neural Networks for Question Answering | We describe a question answering model that applies to both images and structured knowledge bases. The model uses natural language strings to automatically assemble neural networks from a collection of composable modules. Parameters for these modules are learned jointly with network-assembly parameters via reinforcemen... | {
"paragraphs": [
[
"This paper presents a compositional, attentional model for answering questions about a variety of world representations, including images and structured knowledge bases. The model translates from questions to dynamically assembled neural networks, then applies these networks to world re... | {
"answers": [
{
"annotation_id": [
"46e5ea281159cbd3c2de92d958a1f2a835593916"
],
"answer": [
{
"evidence": [
"Our first task is the recently-introduced Visual Question Answering challenge (VQA) BIBREF22 . The VQA dataset consists of more than 200,000 images... | {
"caption": [
"Figure 1: A learned syntactic analysis (a) is used to assemble a collection of neural modules (b) into a deep neural network (c), and applied to a world representation (d) to produce an answer.",
"Figure 2: Simple neural module networks, corresponding to the questions What color is the bird? a... | [
"What benchmark datasets they use?"
] | [
[
"1601.01705-Questions about images-0",
"1601.01705-Questions about geography-0"
]
] | [
"VQA and GeoQA"
] | 535 |
1910.08772 | MonaLog: a Lightweight System for Natural Language Inference Based on Monotonicity | We present a new logic-based inference engine for natural language inference (NLI) called MonaLog, which is based on natural logic and the monotonicity calculus. In contrast to existing logic-based approaches, our system is intentionally designed to be as lightweight as possible, and operates using a small set of well-... | {
"paragraphs": [
[
"There has been rapid progress on natural language inference (NLI) in the last several years, due in large part to recent advances in neural modeling BIBREF0 and the introduction of several new large-scale inference datasets BIBREF1, BIBREF2, BIBREF3, BIBREF4. Given the high performance ... | {
"answers": [
{
"annotation_id": [
"eb8e6c6ed92b80cc42bac8e39e480ca641184eeb"
],
"answer": [
{
"evidence": [
"To show the effectiveness of our approach, we show results on the SICK dataset BIBREF1, a common benchmark for logic-based NLI, and find MonaLog to... | {
"caption": [
"Figure 1: An illustration of our general monotonicity reasoning pipeline using an example premise and hypothesis pair: All schoolgirls are on the train and All happy schoolgirls are on the train.",
"Figure 2: Example search tree for SICK 340, where P is A schoolgirl with a black bag is on a cr... | [
"How do they combine MonaLog with BERT?",
"How do they select monotonicity facts?"
] | [
[
"1910.08772-Introduction-3",
"1910.08772-Introduction-1",
"1910.08772-MonaLog and SICK-0"
],
[
"1910.08772-Our System: MonaLog ::: Knowledge Base @!START@${K}$@!END@ and Sentence Base @!START@${S}$@!END@-0"
]
] | [
"They use MonaLog for data augmentation to fine-tune BERT on this task",
"They derive it from WordNet"
] | 537 |
1909.11467 | Developing a Fine-Grained Corpus for a Less-resourced Language: the case of Kurdish | Kurdish is a less-resourced language consisting of different dialects written in various scripts. Approximately 30 million people in different countries speak the language. The lack of corpora is one of the main obstacles in Kurdish language processing. In this paper, we present KTC-the Kurdish Textbooks Corpus, which ... | {
"paragraphs": [
[
"Kurdish is an Indo-European language mainly spoken in central and eastern Turkey, northern Iraq and Syria, and western Iran. It is a less-resourced language BIBREF0, in other words, a language for which general-purpose grammars and raw internet-based corpora are the main existing resour... | {
"answers": [
{
"annotation_id": [
"82024c55c3aabf914b901626dcaf5d5dcd9a3f56"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no":... | {
"caption": [
"Table 1: Statistics of the corpus - In the Course Level column, (i) represents Institute2 .",
"Figure 1: Common tokens among textbook subjects."
],
"file": [
"2-Table1-1.png",
"3-Figure1-1.png"
]
} | [
"What are the 12 categories devised?"
] | [
[
"1909.11467-2-Table1-1.png"
]
] | [
"Economics, Genocide, Geography, History, Human Rights, Kurdish, Kurdology, Philosophy, Physics, Theology, Sociology, Social Study"
] | 541 |
1804.08186 | Automatic Language Identification in Texts: A Survey | Language identification (LI) is the problem of determining the natural language that a document or part thereof is written in. Automatic LI has been extensively researched for over fifty years. Today, LI is a key part of many text processing pipelines, as text processing techniques generally assume that the language of... | {
"paragraphs": [
[
"Language identification (“”) is the task of determining the natural language that a document or part thereof is written in. Recognizing text in a specific language comes naturally to a human reader familiar with the language. intro:langid presents excerpts from Wikipedia articles in dif... | {
"answers": [
{
"annotation_id": [
"487eb0ec2a10b179d7312cc807155ea0d3f21f1a"
],
"answer": [
{
"evidence": [
"The most common approach is to treat the task as a document-level classification problem. Given a set of evaluation documents, each having a known ... | {
"caption": [
"Table 1: Excerpts from Wikipedia articles on NLP in different languages.",
"Table 2: List of articles (2013–2017) where relative frequencies of character n-grams have been used as features. The columns indicate the length of the n-grams used. “***” indicates the empirically best n-gram length ... | [
"what are the off-the-shelf systems discussed in the paper?"
] | [
[
"1804.08186-Off-the-Shelf Language Identifiers-10",
"1804.08186-Off-the-Shelf Language Identifiers-7",
"1804.08186-Off-the-Shelf Language Identifiers-4",
"1804.08186-Off-the-Shelf Language Identifiers-3",
"1804.08186-Off-the-Shelf Language Identifiers-8",
"1804.08186-Off-the-Shelf Language... | [
"Answer with content missing: (Names of many identifiers missing) TextCat, ChromeCLD, LangDetect, langid.py, whatlang, whatthelang, YALI, LDIG, Polyglot 3000, Lextek Language Identifier and Open Xerox Language Identifier."
] | 542 |
1909.05438 | Neural Semantic Parsing in Low-Resource Settings with Back-Translation and Meta-Learning | Neural semantic parsing has achieved impressive results in recent years, yet its success relies on the availability of large amounts of supervised data. Our goal is to learn a neural semantic parser when only prior knowledge about a limited number of simple rules is available, without access to either annotated program... | {
"paragraphs": [
[
"Semantic parsing aims to map natural language questions to the logical forms of their underlying meanings, which can be regarded as programs and executed to yield answers, aka denotations BIBREF0 . In the past few years, neural network based semantic parsers have achieved promising perf... | {
"answers": [
{
"annotation_id": [
"691175658de1ec8838a63a134a2b95d7b926bf32"
],
"answer": [
{
"evidence": [
"Following the back-translation paradigm BIBREF3 , BIBREF4 , we have a semantic parser, which maps a natural language question INLINEFORM0 to a logi... | {
"caption": [
"Figure 1: An illustration of the difference between (a) data combination which learns a monolithic, one-size-fits-all model, (b) self-training which learns from predictions which the model produce and (c) meta-learning that reuse the acquired ability to learn.",
"Table 1: Results on WikiSQL te... | [
"How many rules had to be defined?"
] | [
[
"1909.05438-Table-Based Semantic Parsing-1",
"1909.05438-Knowledge-Based Question Answering-2",
"1909.05438-Conversational Table-Based Semantic Parsing-3"
]
] | [
"WikiSQL - 2 rules (SELECT, WHERE)\nSimpleQuestions - 1 rule\nSequentialQA - 3 rules (SELECT, WHERE, COPY)"
] | 546 |
2003.08370 | Distant Supervision and Noisy Label Learning for Low Resource Named Entity Recognition: A Study on Hausa and Yor\`ub\'a | The lack of labeled training data has limited the development of natural language processing tools, such as named entity recognition, for many languages spoken in developing countries. Techniques such as distant and weak supervision can be used to create labeled data in a (semi-) automatic way. Additionally, to allevia... | {
"paragraphs": [
[
"Named Entity Recognition (NER) is a classification task that identifies words in a text that refer to entities (such as dates, person, organization and location names). It is a core task of natural language processing and a component for many downstream applications like search engines,... | {
"answers": [
{
"annotation_id": [
"4a0e4c2d0a6c8bd5658f150baf05c1b5bd17ae7a"
],
"answer": [
{
"evidence": [
"The Hausa data used in this paper is part of the LORELEI language pack. It consists of Broad Operational Language Translation (BOLT) data gathered ... | {
"caption": [
"Figure 1: F1-scores and standard error for Yorùbá."
],
"file": [
"5-Figure1-1.png"
]
} | [
"What was performance of classifiers before/after using distant supervision?"
] | [
[
"2003.08370-5-Figure1-1.png",
"2003.08370-Results-2"
]
] | [
"Bi-LSTM: For low resource <17k clean data: Using distant supervision resulted in huge boost of F1 score (1k eg. ~9 to ~36 with distant supervision)\nBERT: <5k clean data boost of F1 (1k eg. ~32 to ~47 with distant supervision)"
] | 547 |
1909.00361 | Cross-Lingual Machine Reading Comprehension | Though the community has made great progress on Machine Reading Comprehension (MRC) task, most of the previous works are solving English-based MRC problems, and there are few efforts on other languages mainly due to the lack of large-scale training data. In this paper, we propose Cross-Lingual Machine Reading Comprehen... | {
"paragraphs": [
[
"Machine Reading Comprehension (MRC) has been a popular task to test the reading ability of the machine, which requires to read text material and answer the questions based on it. Starting from cloze-style reading comprehension, various neural network approaches have been proposed and ma... | {
"answers": [
{
"annotation_id": [
"766f8876da0a213c6760a3239ec9afff1d8d5940"
],
"answer": [
{
"evidence": [
"We evaluate our approaches on two public Chinese span-extraction machine reading comprehension datasets: CMRC 2018 (simplified Chinese) BIBREF8 and... | {
"caption": [
"Figure 1: Back-translation approaches for cross-lingual machine reading comprehension (Left: GNMT, Middle: Answer Aligner, Right: Answer Verifier)",
"Figure 2: System overview of the Dual BERT model for cross-lingual machine reading comprehension task.",
"Table 1: Statistics of CMRC 2018 a... | [
"How big are the datasets used?"
] | [
[
"1909.00361-Experiments ::: Experimental Setups-5",
"1909.00361-6-Table1-1.png",
"1909.00361-Experiments ::: Experimental Setups-1",
"1909.00361-Experiments ::: Experimental Setups-0"
]
] | [
"Evaluation datasets used:\nCMRC 2018 - 18939 questions, 10 answers\nDRCD - 33953 questions, 5 answers\nNIST MT02/03/04/05/06/08 Chinese-English - Not specified\n\nSource language train data:\nSQuAD - Not specified"
] | 549 |
1908.11546 | Modeling Multi-Action Policy for Task-Oriented Dialogues | Dialogue management (DM) plays a key role in the quality of the interaction with the user in a task-oriented dialogue system. In most existing approaches, the agent predicts only one DM policy action per turn. This significantly limits the expressive power of the conversational agent and introduces unwanted turns of in... | {
"paragraphs": [
[
"In a task-oriented dialogue system, the dialogue manager policy module predicts actions usually in terms of dialogue acts and domain specific slots. It is a crucial component that influences the efficiency (e.g., the conciseness and smoothness) of the communication between the user and ... | {
"answers": [
{
"annotation_id": [
"b8a64a4d67c7487ae8a64a3e227bfde8f3fa45d9"
],
"answer": [
{
"evidence": [
"The experiment dataset comes from Microsoft Research (MSR) . It contains three domains: movie, taxi, and restaurant. The total count of dialogues p... | {
"caption": [
"Table 1: Dialogue example.",
"Figure 1: CAS decoder: at each step, a tuple of (continue, act, slots) is produced. The KB vector k regarding the queried result from knowledge base is not shown for brevity.",
"Figure 2: The gated CAS recurrent cell contains three units: continue unit, act un... | [
"What datasets are used for training/testing models? ",
"How better is gCAS approach compared to other approaches?",
"What is specific to gCAS cell?"
] | [
[
"1908.11546-Experiments-0"
],
[
"1908.11546-3-Table5-1.png"
],
[
"1908.11546-Introduction-2"
]
] | [
"Microsoft Research dataset containing movie, taxi and restaurant domains.",
"For entity F1 in the movie, taxi and restaurant domain it results in scores of 50.86, 64, and 60.35. For success, it results it outperforms in the movie and restaurant domain with scores of 77.95 and 71.52",
"It has three sequentiall... | 550 |
1905.10238 | Incorporating Context and External Knowledge for Pronoun Coreference Resolution | Linking pronominal expressions to the correct references requires, in many cases, better analysis of the contextual information and external knowledge. In this paper, we propose a two-layer model for pronoun coreference resolution that leverages both context and external knowledge, where a knowledge attention mechanism... | {
"paragraphs": [
[
"The question of how human beings resolve pronouns has long been of interest to both linguistics and natural language processing (NLP) communities, for the reason that pronoun itself has weak semantic meaning BIBREF0 and brings challenges in natural language understanding. To explore sol... | {
"answers": [
{
"annotation_id": [
"4a8b2c06fd45fcf979f93c96736e5d54824a1515"
],
"answer": [
{
"evidence": [
"The CoNLL-2012 shared task BIBREF21 corpus is used as the evaluation dataset, which is selected from the Ontonotes 5.0. Following conventional appr... | {
"caption": [
"Figure 1: Pronoun coreference examples, where each example requires different knowledge for its resolution. Blue bold font refers to the target pronoun, where the correct noun reference and other candidates are marked by green underline and brackets, respectively.",
"Figure 2: The architecture... | [
"What is the source of external knowledge?"
] | [
[
"1905.10238-Knowledge Types-1"
]
] | [
"counts of predicate-argument tuples from English Wikipedia"
] | 551 |
1905.07464 | A Multi-Task Learning Framework for Extracting Drugs and Their Interactions from Drug Labels | Preventable adverse drug reactions as a result of medical errors present a growing concern in modern medicine. As drug-drug interactions (DDIs) may cause adverse reactions, being able to extracting DDIs from drug labels into machine-readable form is an important effort in effectively deploying drug safety information. ... | {
"paragraphs": [
[
"Preventable adverse drug reactions (ADRs) introduce a growing concern in the modern healthcare system as they represent a large fraction of hospital admissions and play a significant role in increased health care costs BIBREF0 . Based on a study examining hospital admission data, it is ... | {
"answers": [
{
"annotation_id": [
"4b637c3e214b64e96036644ab3eec3bbe4c98e77"
],
"answer": [
{
"evidence": [
"Each drug label is a collection of sections (e.g., DOSAGE & ADMINISTRATION, CONTRAINDICATIONS, and WARNINGS) where each section contains one or mor... | {
"caption": [
"Figure 1: An example illustrating the DDI task",
"Table 1: Characteristics of datasets",
"Table 2: Example of the tagging scheme",
"Figure 2: The multi-task neural network for DDI extraction",
"Table 3: Preliminary results based on 11-fold cross validation over Training-22 with two... | [
"What were the sizes of the test sets?"
] | [
[
"1905.07464-3-Table1-1.png",
"1905.07464-Datasets-0"
]
] | [
"Test set 1 contained 57 drug labels and 8208 sentences and test set 2 contained 66 drug labels and 4224 sentences"
] | 553 |
1901.09755 | Language Independent Sequence Labelling for Opinion Target Extraction | In this research note we present a language independent system to model Opinion Target Extraction (OTE) as a sequence labelling task. The system consists of a combination of clustering features implemented on top of a simple set of shallow local features. Experiments on the well known Aspect Based Sentiment Analysis (A... | {
"paragraphs": [
[
"Opinion Mining and Sentiment Analysis (OMSA) are crucial for determining opinion trends and attitudes about commercial products, companies reputation management, brand monitoring, or to track attitudes by mining social media, etc. Furthermore, given the explosion of information produced... | {
"answers": [
{
"annotation_id": [
"751287ba1c4e73b9590faaddd00cf51270c324d8"
],
"answer": [
{
"evidence": [
"Early approaches to Opinion Target Extraction (OTE) were unsupervised, although later on the vast majority of works have been based on supervised a... | {
"caption": [
"Table 1: ABSA SemEval 2014-2016 datasets for the restaurant domain. B-target indicates the number of opinion targets in each set; I-target refers to the number of multiword targets.",
"Table 2: Unlabeled corpora to induce clusters. For each corpus and cluster type the number of words (in milli... | [
"Which datasets are used?"
] | [
[
"1901.09755-6-Table1-1.png",
"1901.09755-Unlabelled Corpora-1",
"1901.09755-Unlabelled Corpora-0",
"1901.09755-ABSA Datasets-0"
]
] | [
"ABSA SemEval 2014-2016 datasets\nYelp Academic Dataset\nWikipedia dumps"
] | 556 |
2002.05829 | HULK: An Energy Efficiency Benchmark Platform for Responsible Natural Language Processing | Computation-intensive pretrained models have been taking the lead of many natural language processing benchmarks such as GLUE. However, energy efficiency in the process of model training and inference becomes a critical bottleneck. We introduce HULK, a multi-task energy efficiency benchmarking platform for responsible ... | {
"paragraphs": [
[
"Environmental concerns of machine learning research has been rising as the carbon emission of certain tasks like neural architecture search reached an exceptional “ocean boiling” level BIBREF7. Increased carbon emission has been one of the key factors to aggravate global warming . Resea... | {
"answers": [
{
"annotation_id": [
"8dbaaaf5f00c916eb0c8489aac1e64d873dfa347"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Pretraining costs of baseline models. Hardware and pretraining time are collected from original papers, with which costs a... | {
"caption": [
"Table 1: Pretraining costs of baseline models. Hardware and pretraining time are collected from original papers, with which costs are estimated with current TPU price at $8 per hour with 4 core TPU v3 chips and V100 GPU at $3.06 per hour. DistilBERT model is trained upon a pretrained BERT model. P... | [
"How much does it minimally cost to fine-tune some model according to benchmarking framework?",
"What models are included in baseline benchmarking results?"
] | [
[
"2002.05829-2-Table1-1.png"
],
[
"2002.05829-2-Table1-1.png"
]
] | [
"$1,728",
"BERT, XLNET RoBERTa, ALBERT, DistilBERT"
] | 558 |
1912.00864 | Conclusion-Supplement Answer Generation for Non-Factoid Questions | This paper tackles the goal of conclusion-supplement answer generation for non-factoid questions, which is a critical issue in the field of Natural Language Processing (NLP) and Artificial Intelligence (AI), as users often require supplementary information before accepting a conclusion. The current encoder-decoder fram... | {
"paragraphs": [
[
"Question Answering (QA) modules play particularly important roles in recent dialog-based Natural Language Understanding (NLU) systems, such as Apple's Siri and Amazon's Echo. Users chat with AI systems in natural language to get the answers they are seeking. QA systems can deal with two... | {
"answers": [
{
"annotation_id": [
"4dad94cc08e748b6b136ea2a0f974adc393ccc66"
],
"answer": [
{
"evidence": [
"NAGMWA is much better than the other methods except NAGM, since it generates answers whose conclusions and supplements as well as their combination... | {
"caption": [
"Figure 1: Neural conclusion-supplement answer generation model.",
"Table 1: Results when changing α.",
"Table 2: Results when using sentence-type embeddings.",
"Table 6: Human evaluation (nfL6).",
"Table 4: ROUGE-L/BLEU-4 for nfL6.",
"Table 7: Example answers generated by CLSTM... | [
"How much more accurate is the model than the baseline?"
] | [
[
"1912.00864-Evaluation ::: Results ::: Human evaluation-3",
"1912.00864-Evaluation ::: Results ::: Human evaluation-1",
"1912.00864-Evaluation ::: Results ::: Performance-1",
"1912.00864-6-Table4-1.png",
"1912.00864-6-Table6-1.png"
]
] | [
"For the Oshiete-goo dataset, the NAGM model's ROUGE-L score is higher than the highest performing conventional model, Trans, by 0.021, and its BLEU-4 score is higher than the highest performing model CLSTM by 0.037. For the nfL6 dataset, the NAGM model's ROUGE-L score is higher than the highest performing convent... | 563 |
1910.11204 | Syntax-Enhanced Self-Attention-Based Semantic Role Labeling | As a fundamental NLP task, semantic role labeling (SRL) aims to discover the semantic roles for each predicate within one sentence. This paper investigates how to incorporate syntactic knowledge into the SRL task effectively. We present different approaches of encoding the syntactic information derived from dependency ... | {
"paragraphs": [
[
"The task of semantic role labeling (SRL) is to recognize arguments for a given predicate in one sentence and assign labels to them, including “who” did “what” to “whom”, “when”, “where”, etc. Figure FIGREF1 is an example sentence with both semantic roles and syntactic dependencies. Sinc... | {
"answers": [
{
"annotation_id": [
"e939615a4ca7e4e5b67ab7c21b0cb07526f873eb"
],
"answer": [
{
"evidence": [
"Table TABREF46 shows that our Open model achieves more than 3 points of f1-score than the state-of-the-art result, and RelAwe with DepPath&RelPath ... | {
"caption": [
"Figure 1: An example of one sentence with its syntactic dependency tree and semantic roles. Arcs above the sentence are semantic role annotations for the predicate “鼓励 (encourage)” and below the sentence are syntactic dependency annotations of the whole sentence. The meaning of this sentence is “C... | [
"What is new state-of-the-art performance on CoNLL-2009 dataset?",
"What are two strong baseline methods authors refer to?"
] | [
[
"1910.11204-Experiment ::: Final Results on the Chinese Test Data-1",
"1910.11204-8-Table7-1.png"
],
[
"1910.11204-8-Table7-1.png"
]
] | [
"In closed setting 84.22 F1 and in open 87.35 F1.",
"Marcheggiani and Titov (2017) and Cai et al. (2018)"
] | 564 |
2003.07758 | Multi-modal Dense Video Captioning | Dense video captioning is a task of localizing interesting events from an untrimmed video and producing textual description (captions) for each localized event. Most of the previous works in dense video captioning are solely based on visual information and completely ignore the audio track. However, audio, and speech, ... | {
"paragraphs": [
[
"The substantial amount of freely available video material has brought up the need for automatic methods to summarize and compactly represent the essential content. One approach would be to produce a short video skim containing the most important video segments as proposed in the video s... | {
"answers": [
{
"annotation_id": [
"b5db9e6889d00da544fc266e280bf4a20a1560dd"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 5. The results are split for category and version of MDVC. The number of samples per category is given in parenthesis. The M... | {
"caption": [
"Figure 1. Example video with ground truth captions and predictions of Multi-modal Dense Video Captioning module (MDVC). It may account for any number of modalities, i. e. audio or speech.",
"Figure 2. The proposed Multi-modal Dense Video Captioning (MDVC) framework. Given an input consisting o... | [
"How many category tags are considered?",
"What domain does the dataset fall into?"
] | [
[
"2003.07758-8-Figure5-1.png"
],
[
"2003.07758-Experiments ::: Dataset-0"
]
] | [
"14 categories",
"YouTube videos"
] | 565 |
1906.09774 | Emotionally-Aware Chatbots: A Survey | Textual conversational agent or chatbots' development gather tremendous traction from both academia and industries in recent years. Nowadays, chatbots are widely used as an agent to communicate with a human in some services such as booking assistant, customer service, and also a personal partner. The biggest challenge ... | {
"paragraphs": [
[
"Conversational agents or dialogue systems development are gaining more attention from both industry and academia BIBREF0 , BIBREF1 in the latest years. Some works tried to model them into domain-specific tasks such as customer service BIBREF2 , BIBREF3 , and shopping assistance BIBREF4 ... | {
"answers": [
{
"annotation_id": [
"62311b8985db0206af5e9c477b8860c910940c0a"
],
"answer": [
{
"evidence": [
"We characterize the evaluation of Emotionally-Aware Chatbot into two different parts, qualitative and quantitative assessment. Qualitative assessme... | {
"caption": [
"Table 1: Summarization of the Proposed Approaches for Emotionally-Aware Chatbot Development.",
"Table 2: Summarization of dataset available for emotionally-aware chatbot.",
"Table 3: Summarization of the available affective resources for emotion classification task."
],
"file": [
"... | [
"How are EAC evaluated?"
] | [
[
"1906.09774-Quantitative Assessment-1",
"1906.09774-Quantitative Assessment-0",
"1906.09774-Qualitative Assessment-0",
"1906.09774-Evaluating EAC-0"
]
] | [
"Qualitatively through efficiency, effectiveness and satisfaction aspects and quantitatively through metrics such as precision, recall, accuracy, BLEU score and even human judgement."
] | 573 |
1810.03459 | Multilingual sequence-to-sequence speech recognition: architecture, transfer learning, and language modeling | Sequence-to-sequence (seq2seq) approach for low-resource ASR is a relatively new direction in speech research. The approach benefits by performing model training without using lexicon and alignments. However, this poses a new problem of requiring more data compared to conventional DNN-HMM systems. In this work, we atte... | {
"paragraphs": [
[
"The sequence-to-sequence (seq2seq) model proposed in BIBREF0 , BIBREF1 , BIBREF2 is a neural architecture for performing sequence classification and later adopted to perform speech recognition in BIBREF3 , BIBREF4 , BIBREF5 . The model allows to integrate the main blocks of ASR such as ... | {
"answers": [
{
"annotation_id": [
"84557b762ca8e23100b16065c4a0968337b65221"
],
"answer": [
{
"evidence": [
"In this work, the experiments are conducted using the BABEL speech corpus collected from the IARPA babel program. The corpus is mainly composed of ... | {
"caption": [
"Fig. 1: Hybrid attention/CTC network with LM extension: the shared encoder is trained by both CTC and attention model objectives simultaneously. The joint decoder predicts an output label sequence by the CTC, attention decoder and RNN-LM.",
"Table 1: Details of the BABEL data used for performi... | [
"What languages do they use?"
] | [
[
"1810.03459-5-Table1-1.png",
"1810.03459-Data details and experimental setup-0"
]
] | [
"Train languages are: Cantonese, Bengali, Pashto, Turkish, Vietnamese, Haitian, Tamil, Kurdish, Tokpisin and Georgian, while Assamese, Tagalog, Swahili, Lao are used as target languages."
] | 579 |
1910.06061 | Feature-Dependent Confusion Matrices for Low-Resource NER Labeling with Noisy Labels | In low-resource settings, the performance of supervised labeling models can be improved with automatically annotated or distantly supervised data, which is cheap to create but often noisy. Previous works have shown that significant improvements can be reached by injecting information about the confusion between clean a... | {
"paragraphs": [
[
"Most languages, even with millions of speakers, have not been the center for natural language processing and are counted as low-resource for tasks like named entity recognition (NER). Similarly, even for high-resource languages, there exists only few labeled data for most entity types b... | {
"answers": [
{
"annotation_id": [
"53a7f2513d6f0a4bb020aa66ae6c6d0d7e06154c"
],
"answer": [
{
"evidence": [
"We follow the BiLSTM architecture from BIBREF3. Only the optimizer was changed for all models to NADAM BIBREF22 as this helped with convergence pro... | {
"caption": [
"Figure 1: Visualization of the noisy labels, confusion matrix architecture. The dotted line shows the proposed new dependency.",
"Figure 2: Confusion matrices used for initialization when training with the English dataset. The global matrix is given as well as five of the feature-dependent mat... | [
"How they evaluate their approach?"
] | [
[
"1910.06061-Introduction-2"
]
] | [
"They evaluate newly proposed models in several low-resource settings across different languages with real, distantly supervised data with non-synthetic noise"
] | 581 |
2002.10361 | Multilingual Twitter Corpus and Baselines for Evaluating Demographic Bias in Hate Speech Recognition | Existing research on fairness evaluation of document classification models mainly uses synthetic monolingual data without ground truth for author demographic attributes. In this work, we assemble and publish a multilingual Twitter corpus for the task of hate speech detection with inferred four author demographic factor... | {
"paragraphs": [
[
"While document classification models should be objective and independent from human biases in documents, research have shown that the models can learn human biases and therefore be discriminatory towards particular demographic groups BIBREF0, BIBREF1, BIBREF2. The goal of fairness-aware... | {
"answers": [
{
"annotation_id": [
"dff3eab59a33a4e6cab7e948decc6b4f2b5a6569"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Statistical summary of multilingual corpora across English, Italian, Polish, Portuguese and Spanish. We present number of ... | {
"caption": [
"Table 1: Statistical summary of multilingual corpora across English, Italian, Polish, Portuguese and Spanish. We present number of users (Users), documents (Docs), and average tokens per document (Tokens) in the corpus, plus the label distribution (HS Ratio, percent of documents labeled positive f... | [
"How large is the corpus?",
"How large is the dataset?"
] | [
[
"2002.10361-2-Table1-1.png"
],
[
"2002.10361-2-Table1-1.png"
]
] | [
"It contains 106,350 documents",
"over 104k documents"
] | 582 |
1810.10254 | Learn to Code-Switch: Data Augmentation using Copy Mechanism on Language Modeling | Building large-scale datasets for training code-switching language models is challenging and very expensive. To alleviate this problem using parallel corpus has been a major workaround. However, existing solutions use linguistic constraints which may not capture the real data distribution. In this work, we propose a no... | {
"paragraphs": [
[
"Language mixing has been a common phenomenon in multilingual communities. It is motivated in response to social factors as a way of communication in a multicultural society. From a sociolinguistic perspective, individuals do code-switching in order to construct an optimal interaction by... | {
"answers": [
{
"annotation_id": [
"6e34b0f40598e906c0faf7e69f3c420b204047f9"
],
"answer": [
{
"evidence": [
"In this section, we present the experimental settings for pointer-generator network and language model. Our experiment, our pointer-generator model... | {
"caption": [
"Fig. 1. Pointer Generator Networks [8]. The figure shows the example of input and 3-best generated sentences.",
"Table 3. Language Modeling Results (in perplexity).",
"Table 1. Data Statistics of SEAME Phase II and Generated Sequences using Pointer-generator Network [10].",
"Table 2. C... | [
"What was their perplexity score?",
"What parallel corpus did they use?"
] | [
[
"1810.10254-Results-0",
"1810.10254-3-Table3-1.png"
],
[
"1810.10254-Training Setup-0"
]
] | [
"Perplexity score 142.84 on dev and 138.91 on test",
"Parallel monolingual corpus in English and Mandarin"
] | 585 |
1703.06492 | VQABQ: Visual Question Answering by Basic Questions | Taking an image and question as the input of our method, it can output the text-based answer of the query question about the given image, so called Visual Question Answering (VQA). There are two main modules in our algorithm. Given a natural language question about an image, the first module takes the question as input... | {
"paragraphs": [
[
"Visual Question Answering (VQA) is a challenging and young research field, which can help machines achieve one of the ultimate goals in computer vision, holistic scene understanding BIBREF1 . VQA is a computer vision task: a system is given an arbitrary text-based question about an imag... | {
"answers": [
{
"annotation_id": [
"e3aa84e3788c28eab6a7e9354580f34bebd4a36c"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 4. Evaluation results on VQA dataset [1]. ”-” indicates the results are not available, and the Ours+VGG(1) and Ours+VGG(2) ar... | {
"caption": [
"Figure 1. Examples of basic questions. Note that MQ denotes the main question and BQ denotes the basic question.",
"Figure 2. VQABQ working pipeline. Note that all of the training and validation questions are only encoded by Skip-Thoughts one time for generating the basic question matrix. That... | [
"In which setting they achieve the state of the art?"
] | [
[
"1703.06492-6-Table4-1.png"
]
] | [
"in open-ended task esp. for counting-type questions "
] | 587 |
1701.08118 | Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis | Some users of social media are spreading racist, sexist, and otherwise hateful content. For the purpose of training a hate speech detection system, the reliability of the annotations is crucial, but there is no universally agreed-upon definition. We collected potentially hateful messages and asked two groups of interne... | {
"paragraphs": [
[
"Social media are sometimes used to disseminate hateful messages. In Europe, the current surge in hate speech has been linked to the ongoing refugee crisis. Lawmakers and social media sites are increasingly aware of the problem and are developing approaches to deal with it, for example p... | {
"answers": [
{
"annotation_id": [
"f1e8a4a248ed425ebe2bb19c2036df5d051020b5"
],
"answer": [
{
"evidence": [
"As previously mentioned, there is no German hate speech corpus available for our needs, especially not for the very recent topic of the refugee cri... | {
"caption": [
"Table 1: Summary statistics with p values and effect size estimates from WMW tests. Not all participants chose to report their age or gender.",
"Figure 1: Reliability (Krippendorff’s a) for the different groups and questions"
],
"file": [
"4-Table1-1.png",
"5-Figure1-1.png"
]
} | [
"Was the degree of offensiveness taken as how generally offensive the text was, or how personally offensive it was to the annotator?"
] | [
[
"1701.08118-Methods-1"
]
] | [
"Personal thought of the annotator."
] | 589 |
1905.09866 | Fair is Better than Sensational:Man is to Doctor as Woman is to Doctor | Analogies such as man is to king as woman is to X are often used to illustrate the amazing power of word embeddings. Concurrently, they have also exposed how strongly human biases are encoded in vector spaces built on natural language. While finding that queen is the answer to man is to king as woman is to X leaves us ... | {
"paragraphs": [
[
"Word embeddings are distributed representations of texts which capture similarities between words. Beside improving a wide variety of NLP tasks, the power of word embeddings is often also tested intrinsically. Together with the idea of training word embeddings, BIBREF0 introduced the id... | {
"answers": [
{
"annotation_id": [
"565b347c90ccb0779998b23cb28133de086e5e22"
],
"answer": [
{
"evidence": [
"For both word2vec BIBREF0 and gensim BIBREF7 we adapted the code so that the input terms of the analogy query are allowed to be returned. Throughou... | {
"caption": [
"Table 1: Performance on the standard analogy test set (Mikolov et al. 2013a) using the original and the fair versions of the analogy code. The fair version allows for any term in the vocabulary to be returned, including the input terms, while the original one does not allow any of the input terms ... | [
"Which embeddings do they detect biases in?"
] | [
[
"1905.09866-Experimental details-0"
]
] | [
"Word embeddings trained on GoogleNews and Word embeddings trained on Reddit dataset"
] | 590 |
1912.09152 | Annotating and normalizing biomedical NEs with limited knowledge | Named entity recognition (NER) is the very first step in the linguistic processing of any new domain. It is currently a common process in BioNLP on English clinical text. However, it is still in its infancy in other major languages, as it is the case for Spanish. Presented under the umbrella of the PharmaCoNER shared t... | {
"paragraphs": [
[
"Named Entity Recognition (ner) is considered a necessary first step in the linguistic processing of any new domain, as it facilitates the development of applications showing co-occurrences of domain entities, cause-effect relations among them, and, eventually, it opens the (still to be ... | {
"answers": [
{
"annotation_id": [
"9b85f5604c00aea1554d310d1f44b13b82b684ae"
],
"answer": [
{
"evidence": [
"In this paper, in spite of previous statements, we present a system that uses rule-based and dictionary-based methods combined (in a way we prefer ... | {
"caption": [
"Table 1: Results for PHARMACONER test dataset (both subtasks)"
],
"file": [
"5-Table1-1.png"
]
} | [
"What are the two PharmaCoNER subtasks?"
] | [
[
"1912.09152-Resource building ::: SNOMED CT-0"
]
] | [
"Entity identification with offset mapping and concept indexing"
] | 591 |
2004.02451 | An analysis of the utility of explicit negative examples to improve the syntactic abilities of neural language models | We explore the utilities of explicit negative examples in training neural language models. Negative examples here are incorrect words in a sentence, such as"barks"in"*The dogs barks". Neural language models are commonly trained only on positive examples, a set of sentences in the training data, but recent studies sugge... | {
"paragraphs": [
[
"intro",
"Despite not being exposed to explicit syntactic supervision, neural language models (LMs), such as recurrent neural networks, are able to generate fluent and natural sentences, suggesting that they induce syntactic knowledge about the language to some extent. However, it ... | {
"answers": [
{
"annotation_id": [
"d2bdb156962cce873e49e9b76f7be8e78341198e"
],
"answer": [
{
"evidence": [
"Since our focus in this paper is an additional loss exploiting negative examples (Section method), we fix the baseline LM throughout the experiment... | {
"caption": [
"Table 1: Comparison of syntactic dependency evaluation accuracies across different types of dependencies and perplexities. Numbers in parentheses are standard deviations. M&L18 is the result of two-layer LSTM-LM in Marvin and Linzen (2018). K19 is the result of distilled two-layer LSTM-LM from RNN... | [
"How do they perform data augmentation?"
] | [
[
"2004.02451-Limitations of LSTM-LMs ::: Setup-0"
]
] | [
"They randomly sample sentences from Wikipedia that contains an object RC and add them to training data"
] | 592 |
1702.06777 | Dialectometric analysis of language variation in Twitter | In the last few years, microblogging platforms such as Twitter have given rise to a deluge of textual data that can be used for the analysis of informal communication between millions of individuals. In this work, we propose an information-theoretic approach to geographic language variation using a corpus based on Twit... | {
"paragraphs": [
[
"Dialects are language varieties defined across space. These varieties can differ in distinct linguistic levels (phonetic, morphosyntactic, lexical), which determine a particular regional speech BIBREF0 . The extension and boundaries (always diffuse) of a dialect area are obtained from t... | {
"answers": [
{
"annotation_id": [
"5748b2ccabb3dd76238f8f9c7a33bd6bd7ab96e3"
],
"answer": [
{
"evidence": [
"After averaging over all concepts, we lose information on the lexical variation that each concept presents but on the other hand one can now invest... | {
"caption": [
"Figure 1: Heatmap of Spanish tweets geolocated in Europe. There exist 11208831 tweets arising from a language detection and tokenization procedure. We have zoomed in those arising in Spain, Portugal and the south of France.",
"Figure 2: Spatial distribution of a few representative concepts bas... | [
"What are the characteristics of the city dialect?",
"What are the characteristics of the rural dialect?"
] | [
[
"1702.06777-Global distance-1"
],
[
"1702.06777-Global distance-1"
]
] | [
"Lexicon of the cities tend to use most forms of a particular concept",
"It uses particular forms of a concept rather than all of them uniformly"
] | 593 |
1912.00582 | BLiMP: A Benchmark of Linguistic Minimal Pairs for English | We introduce The Benchmark of Linguistic Minimal Pairs (shortened to BLiMP), a challenge set for evaluating what language models (LMs) know about major grammatical phenomena in English. BLiMP consists of 67 sub-datasets, each containing 1000 minimal pairs isolating specific contrasts in syntax, morphology, or semantics... | {
"paragraphs": [
[
"Current neural networks for language understanding rely heavily on unsupervised pretraining tasks like language modeling. However, it is still an open question what degree of knowledge state-of-the-art language models (LMs) acquire about different linguistic phenomena. Many recent studi... | {
"answers": [
{
"annotation_id": [
"adb326cffa46d1988d6b73a7a8a5106163f9f460"
],
"answer": [
{
"evidence": [
"An LM's overall performance on *X can be measured simply by taking the proportion of correct predictions across the 67,000 minimal pairs from all p... | {
"caption": [],
"file": []
} | [
"What is the performance of the models on the tasks?"
] | [
[
"1912.00582-Results ::: Overall Results-0",
"1912.00582-Results-0"
]
] | [
"Overall accuracy per model is: 5-gram (60.5), LSTM (68.9), TXL (68.7), GPT-2 (80.1)"
] | 595 |
1705.10586 | Character-Based Text Classification using Top Down Semantic Model for Sentence Representation | Despite the success of deep learning on many fronts especially image and speech, its application in text classification often is still not as good as a simple linear SVM on n-gram TF-IDF representation especially for smaller datasets. Deep learning tends to emphasize on sentence level semantics when learning a represen... | {
"paragraphs": [
[
"Recently, deep learning has been particularly successful in speech and image as an automatic feature extractor BIBREF1 , BIBREF2 , BIBREF3 , however deep learning's application to text as an automatic feature extractor has not been always successful BIBREF0 even compared to simple linea... | {
"answers": [
{
"annotation_id": [
"587358249a7de0e5f3d815eb49e85fd171d99617"
],
"answer": [
{
"evidence": [
"is the standard word counting method whereby the feature vector represents the term frequency of the words in a sentence.",
"is similar... | {
"caption": [
"Figure 1: Illustration of transformation from character-level embeddings to word-level topic-vector. The transformation is done with fully convolutional network (FCN) similar to (Long et al., 2015), each hierarchical level of the FCN will extract an n-gram character feature of the word until the w... | [
"What other non-neural baselines do the authors compare to? "
] | [
[
"1705.10586-Baseline Models-2",
"1705.10586-Baseline Models-1",
"1705.10586-Baseline Models-3"
]
] | [
"bag of words, tf-idf, bag-of-means"
] | 598 |
1909.01958 | From 'F' to 'A' on the N.Y. Regents Science Exams: An Overview of the Aristo Project | AI has achieved remarkable mastery over games such as Chess, Go, and Poker, and even Jeopardy, but the rich variety of standardized exams has remained a landmark challenge. Even in 2016, the best AI system achieved merely 59.3% on an 8th Grade science exam challenge. ::: This paper reports unprecedented success on the... | {
"paragraphs": [
[
"This paper reports on the history, progress, and lessons from the Aristo project, a six-year quest to answer grade-school and high-school science exams. Aristo has recently surpassed 90% on multiple choice questions from the 8th Grade New York Regents Science Exam (see Figure FIGREF6). ... | {
"answers": [
{
"annotation_id": [
"5a770b359fa76d55d4cb3c238b4614e73e03f539"
],
"answer": [
{
"evidence": [
"The current configuration of Aristo comprises of eight solvers, described shortly, each of which attempts to answer a multiple choice question. To ... | {
"caption": [
"Figure 2: Aristo’s scores on Regents 8th Grade Science (non-diagram, multiple choice) over time (held-out test set).",
"Figure 3: The Tuple Inference Solver retrieves tuples relevant to the question, and constructs a support graph for each answer option. Here, the support graph for the choice ... | [
"On what dataset is Aristo system trained?"
] | [
[
"1909.01958-The Aristo System ::: Overview-6",
"1909.01958-7-Table3-1.png",
"1909.01958-Experiments and Results ::: Experimental Methodology ::: Dataset Formulation-1"
]
] | [
"Aristo Corpus\nRegents 4th\nRegents 8th\nRegents `12th\nARC-Easy\nARC-challenge "
] | 600 |