Dataset schema (column name, type, observed length/value range):

  id                   string, length 10
  title                string, length 19–145
  abstract             string, length 273–1.91k
  full_text            dict
  qas                  dict
  figures_and_tables   dict
  question             list
  retrieval_gt         list
  answer_gt            list
  __index_level_0__    int64, range 0–887
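The rows below all follow this schema. As an illustrative sketch only (field names are taken from the schema above; the example record is a placeholder built from the first row, not a complete dataset entry), one way to represent and type-check a row in Python:

```python
# Sketch: type-check a record against the schema listed above.
# Field names come from the dump; the example values are placeholders.

SCHEMA = {
    "id": str,
    "title": str,
    "abstract": str,
    "full_text": dict,
    "qas": dict,
    "figures_and_tables": dict,
    "question": list,
    "retrieval_gt": list,
    "answer_gt": list,
    "__index_level_0__": int,
}

def validate(record: dict) -> bool:
    """True iff the record has exactly the schema's fields with matching types."""
    if set(record) != set(SCHEMA):
        return False
    return all(isinstance(record[k], t) for k, t in SCHEMA.items())

example = {
    "id": "1603.04553",
    "title": "Unsupervised Ranking Model for Entity Coreference Resolution",
    "abstract": "Coreference resolution is one of the first stages ...",
    "full_text": {"paragraphs": [[]]},
    "qas": {"answers": []},
    "figures_and_tables": {"caption": []},
    "question": ["What are resolution model variables?"],
    "retrieval_gt": [["1603.04553-Resolution Mode Variables-0"]],
    "answer_gt": ["..."],
    "__index_level_0__": 189,
}

print(validate(example))  # expected: True
```

A record with a missing or extra field, or a wrongly typed value, fails the check.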

id: 1603.04553
title: Unsupervised Ranking Model for Entity Coreference Resolution
abstract: Coreference resolution is one of the first stages in deep language understanding and its importance has been well recognized in the natural language processing community. In this paper, we propose a generative, unsupervised ranking model for entity coreference resolution by introducing resolution mode variables. Our un...
full_text: { "paragraphs": [ [ "Entity coreference resolution has become a critical component for many Natural Language Processing (NLP) tasks. Systems requiring deep language understanding, such as information extraction BIBREF2 , semantic event learning BIBREF3 , BIBREF4 , and named entity linking BIBREF5 , BIBREF6 ...
qas: { "answers": [ { "annotation_id": [ "f4cf4054065d62aef6d53f8571b081345695a0b6" ], "answer": [ { "evidence": [ "According to previous work BIBREF17 , BIBREF18 , BIBREF1 , antecedents are resolved by different categories of information for different mentions. ...
figures_and_tables: { "caption": [ "Table 1: Feature set for representing a mention under different resolution modes. The Distance feature is for parameter q, while all other features are for parameter t.", "Table 2: Corpora statistics. “ON-Dev” and “ON-Test” are the development and testing sets of the OntoNotes corpus.", "T...
question: [ "What are resolution model variables?", "Is the model presented in the paper state of the art?" ]
retrieval_gt: [ [ "1603.04553-Resolution Mode Variables-0" ], [ "1603.04553-Results and Comparison-1" ] ]
answer_gt: [ "Variables in the set {str, prec, attr} indicating in which mode the mention should be resolved.", "No, supervised models perform better for this task." ]
__index_level_0__: 189

id: 1709.10217
title: The First Evaluation of Chinese Human-Computer Dialogue Technology
abstract: In this paper, we introduce the first evaluation of Chinese human-computer dialogue technology. We detail the evaluation scheme, tasks, metrics and how to collect and annotate the data for training, developing and test. The evaluation includes two tasks, namely user intent classification and online testing of task-orie...
full_text: { "paragraphs": [ [ "Recently, human-computer dialogue has been emerged as a hot topic, which has attracted the attention of both academia and industry. In research, the natural language understanding (NLU), dialogue management (DM) and natural language generation (NLG) have been promoted by the technologie...
qas: { "answers": [ { "annotation_id": [ "38e82d8bcf6c074c9c9690831b23216b9e65f5e8" ], "answer": [ { "evidence": [ "From Figure FIGREF6 , we can see that it is quite different between the open domain chit-chat system and the task-oriented dialogue system. For the...
figures_and_tables: { "caption": [ "Figure 1: A brief comparison of the open domain chit-chat system and the task-oriented dialogue system.", "Table 1: An example of user intent with category information.", "Table 2: An example of the task-oriented human-computer dialogue.", "Table 3: The statistics of the released data ...
question: [ "What was the result of the highest performing system?" ]
retrieval_gt: [ [ "1709.10217-5-Table6-1.png", "1709.10217-Evaluation Results-1", "1709.10217-4-Table5-1.png", "1709.10217-Evaluation Results-0", "1709.10217-4-Table4-1.png" ] ]
answer_gt: [ "For task 1 best F1 score was 0.9391 on closed and 0.9414 on open test.\nFor task2 best result had: Ratio 0.3175 , Satisfaction 64.53, Fluency 0, Turns -1 and Guide 2" ]
__index_level_0__: 190

id: 1901.02262
title: Multi-style Generative Reading Comprehension
abstract: This study tackles generative reading comprehension (RC), which consists of answering questions based on textual evidence and natural language generation (NLG). We propose a multi-style abstractive summarization model for question answering, called Masque. The proposed model has two key characteristics. First, unlike m...
full_text: { "paragraphs": [ [ "Question answering has been a long-standing research problem. Recently, reading comprehension (RC), a challenge to answer a question given textual evidence provided in a document set, has received much attention. Here, current mainstream studies have treated RC as a process of extractin...
qas: { "answers": [ { "annotation_id": [ "0d82c8d3a311a9f695cae5bd50584efe3d67651c" ], "answer": [ { "evidence": [ "Table 2 shows that our ensemble model, controlled with the NLG and Q&A styles, achieved state-of-the-art performance on the NLG and Q&A tasks in te...
figures_and_tables: { "caption": [ "Figure 1: Visualization of how our model generates an answer on MS MARCO. Given an answer style (top: NLG, bottom: Q&A), the model controls the mixture of three distributions for generating words from a vocabulary and copying words from the question and multiple passages at each decoding step.", ...
question: [ "What do they mean by answer styles?", "What are the baselines that Masque is compared against?", "What is the performance achieved on NarrativeQA?", "What is an \"answer style\"?" ]
retrieval_gt: [ [ "1901.02262-Setup-0" ], [ "1901.02262-8-Table5-1.png", "1901.02262-6-Table2-1.png" ], [ "1901.02262-8-Table5-1.png" ], [ "1901.02262-Setup-0" ] ]
answer_gt: [ "well-formed sentences vs concise answers", "BiDAF, Deep Cascade QA, S-Net+CES2S, BERT+Multi-PGNet, Selector+CCG, VNET, DECAPROP, MHPGM+NOIC, ConZNet, RMR+A2D", "Bleu-1: 54.11, Bleu-4: 30.43, METEOR: 26.13, ROUGE-L: 59.87", "well-formed sentences vs concise answers" ]
__index_level_0__: 191

id: 1906.03338
title: Dissecting Content and Context in Argumentative Relation Analysis
abstract: When assessing relations between argumentative units (e.g., support or attack), computational systems often exploit disclosing indicators or markers that are not part of elementary argumentative units (EAUs) themselves, but are gained from their context (position in paragraph, preceding tokens, etc.). We show that this...
full_text: { "paragraphs": [ [ "In recent years we have witnessed a great surge in activity in the area of computational argument analysis (e.g. BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 ), and the emergence of dedicated venues such as the ACL Argument Mining workshop series starting in 2014 BIBREF4 .", "Argumentati...
qas: { "answers": [ { "annotation_id": [ "5bd1279173e673acdbf3c6fb54244548d0a580c2" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": ...
figures_and_tables: { "caption": [ "Figure 1: A graph representation of a topic (node w/ dashed line), two argumentative premise units (nodes w/ solid line), premise-topic relations (positive or negative) and premise-premise relations (here: attacks).", "Figure 2: Production rule extraction from constituency parse for two differ...
question: [ "How are the EAU text spans annotated?" ]
retrieval_gt: [ [ "1906.03338-Feature implementation-8" ] ]
answer_gt: [ "Answer with content missing: (Data and pre-processing section) The data is suited for our experiments because the annotators were explicitly asked to provide annotations on a clausal level." ]
__index_level_0__: 193

id: 1602.08741
title: Gibberish Semantics: How Good is Russian Twitter in Word Semantic Similarity Task?
abstract: The most studied and most successful language models were developed and evaluated mainly for English and other close European languages, such as French, German, etc. It is important to study applicability of these models to other languages. The use of vector space models for Russian was recently studied for multiple co...
full_text: { "paragraphs": [ [ "Word semantic similarity task is an important part of contemporary NLP. It can be applied in many areas, like word sense disambiguation, information retrieval, information extraction and others. It has long history of improvements, starting with simple models, like bag-of-words (often w...
qas: { "answers": [ { "annotation_id": [ "0e0ced62aefb27fde1a0ab5b1516b4455bf569bb" ], "answer": [ { "evidence": [ "Twitter provides well-documented API, which allows to request any information about Tweets, users and their profiles, with respect to rate limits. ...
figures_and_tables: { "caption": [ "Table 1. Properties of Twitter corpus (15 full days)", "Table 2. Properties of Twitter corpus (average on daily slices)", "Table 3. Properties of Twitter corpus (different size)", "Table 4. RSpearman for different context size", "Table 5. Comparison with current single-corpus train...
question: [ "Which Twitter corpus was used to train the word vectors?" ]
retrieval_gt: [ [ "1602.08741-Acquiring data-0", "1602.08741-Acquiring data-3", "1602.08741-Acquiring data-1", "1602.08741-Acquiring data-2" ] ]
answer_gt: [ "They collected tweets in Russian language using a heuristic query specific to Russian" ]
__index_level_0__: 194

id: 1911.12579
title: A New Corpus for Low-Resourced Sindhi Language with Word Embeddings
abstract: Representing words and phrases into dense vectors of real numbers which encode semantic and syntactic properties is a vital constituent in natural language processing (NLP). The success of neural network (NN) models in NLP largely rely on such dense word representations learned on the large unlabeled corpus. Sindhi is ...
full_text: { "paragraphs": [ [ "Sindhi is a rich morphological, mutltiscript, and multidilectal language. It belongs to the Indo-Aryan language family BIBREF0, with significant cultural and historical background. Presently, it is recognized as is an official language BIBREF1 in Sindh province of Pakistan, also being t...
qas: { "answers": [ { "annotation_id": [ "ff8fd9518421abfced12a1541e4f26b5185fc32c" ], "answer": [ { "evidence": [ "Generally, closer words are considered more important to a word’s meaning. The word embeddings models have the ability to capture the lexical relat...
figures_and_tables: { "caption": [ "Table 1: Comparison of existing and proposed work on Sindhi corpus construction and word embeddings.", "Figure 1: Employed preprocessing pipeline for text cleaning", "Table 2: Complete statistics of collected corpus from multiple resources.", "Figure 2: Frequency distribution of letter...
question: [ "How does proposed word embeddings compare to Sindhi fastText word representations?", "How many uniue words are in the dataset?" ]
retrieval_gt: [ [ "1911.12579-Word similarity comparison of Word Embeddings ::: Word pair relationship-0", "1911.12579-Word similarity comparison of Word Embeddings ::: Word pair relationship-1" ], [ "1911.12579-Statistical analysis of corpus-0", "1911.12579-8-Table2-1.png" ] ]
answer_gt: [ "Proposed SG model vs SINDHI FASTTEXT:\nAverage cosine similarity score: 0.650 vs 0.388\nAverage semantic relatedness similarity score between countries and their capitals: 0.663 vs 0.391", "908456 unique words are available in collected corpus." ]
__index_level_0__: 195

id: 1707.00110
title: Efficient Attention using a Fixed-Size Memory Representation
abstract: The standard content-based attention mechanism typically used in sequence-to-sequence models is computationally expensive as it requires the comparison of large encoder and decoder states at each time step. In this work, we propose an alternative attention mechanism based on a fixed size memory representation that is m...
full_text: { "paragraphs": [ [ "Sequence-to-sequence models BIBREF0 , BIBREF1 have achieved state of the art results across a wide variety of tasks, including Neural Machine Translation (NMT) BIBREF2 , BIBREF3 , text summarization BIBREF4 , BIBREF5 , speech recognition BIBREF6 , BIBREF7 , image captioning BIBREF8 , an...
qas: { "answers": [ { "annotation_id": [ "0e7135bdd269d4e83630b27b6ae64fbe62e9e5d4" ], "answer": [ { "evidence": [ "Table 1 shows the BLEU scores of our model on different sequence lengths while varying $K$ . This is a study of the trade-off between computational...
figures_and_tables: { "caption": [ "Figure 1: Memory Attention model architecture. K attention vectors are predicted during encoding, and a linear combination is chosen during decoding. In our example,K=3.", "Figure 2: Surface for the position encodings.", "Table 1: BLEU scores and computation times with varyingK and sequenc...
question: [ "Which baseline methods are used?", "How much is the BLEU score?", "Which datasets are used in experiments?" ]
retrieval_gt: [ [ "1707.00110-Toy Copying Experiment-3", "1707.00110-Toy Copying Experiment-1" ], [ "1707.00110-4-Table1-1.png" ], [ "1707.00110-Machine Translation-0", "1707.00110-Toy Copying Experiment-0" ] ]
answer_gt: [ "standard parametrized attention and a non-attention baseline", "Ranges from 44.22 to 100.00 depending on K and the sequence length.", "Sequence Copy Task and WMT'17" ]
__index_level_0__: 199

id: 1909.01013
title: Duality Regularization for Unsupervised Bilingual Lexicon Induction
abstract: Unsupervised bilingual lexicon induction naturally exhibits duality, which results from symmetry in back-translation. For example, EN-IT and IT-EN induction can be mutually primal and dual problems. Current state-of-the-art methods, however, consider the two tasks independently. In this paper, we propose to train prima...
full_text: { "paragraphs": [ [ "Unsupervised bilingual lexicon induction (UBLI) has been shown to benefit NLP tasks for low resource languages, including unsupervised NMT BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, information retrieval BIBREF5, BIBREF6, dependency parsing BIBREF7, and named entity recognition BIBREF...
qas: { "answers": [ { "annotation_id": [ "b9a984425cbc2d5d4e9ee47b1389f956badcb464" ], "answer": [ { "evidence": [ "We train $\\mathcal {F}$ and $\\mathcal {G}$ jointly and introduce two regularizers. Formally, we hope that $\\mathcal {G}(\\mathcal {F}(X))$ is si...
figures_and_tables: { "caption": [ "Figure 1: (a) Inconsistency between primal model F and the dual model G. (b) An ideal scenario.", "Figure 2: The proposed framework. (a)X → F(X)→ G(F(X))→ X; (b) Y → G(Y )→ F(G(Y ))→ Y .", "Table 1: Accuracy on MUSE and Vecmap.", "Table 4: Accuracy (P@1) on Vecmap. The best results are...
question: [ "What are new best results on standard benchmark?", "How better is performance compared to competitive baselines?", "What 6 language pairs is experimented on?" ]
retrieval_gt: [ [ "1909.01013-4-Table4-1.png", "1909.01013-Experiments ::: Comparison with the State-of-the-art-1" ], [ "1909.01013-4-Table4-1.png", "1909.01013-Experiments ::: Comparison with the State-of-the-art-1" ], [ "1909.01013-Experiments ::: The Effectiveness of Dual Learning-1", "1909.01013...
answer_gt: [ "New best results of accuracy (P@1) on Vecmap:\nOurs-GeoMMsemi: EN-IT 50.00 IT-EN 42.67 EN-DE 51.60 DE-EN 47.22 FI-EN 39.62 EN-ES 39.47 ES-EN 36.43", "Proposed method vs best baseline result on Vecmap (Accuracy P@1):\nEN-IT: 50 vs 50\nIT-EN: 42.67 vs 42.67\nEN-DE: 51.6 vs 51.47\nDE-EN: 47.22 vs 46.96\nEN-FI: 35.8...
__index_level_0__: 200

id: 1910.10408
title: Controlling the Output Length of Neural Machine Translation
abstract: The recent advances introduced by neural machine translation (NMT) are rapidly expanding the application fields of machine translation, as well as reshaping the quality level to be targeted. In particular, if translations have to fit some given layout, quality should not only be measured in terms of adequacy and fluenc...
full_text: { "paragraphs": [ [ "The sequence to sequence BIBREF0, BIBREF1 approach to Neural Machine Translation (NMT) has shown to improve quality in various translation tasks BIBREF2, BIBREF3, BIBREF4. While translation quality is normally measured in terms of correct transfer of meaning and of fluency, there are se...
qas: { "answers": [ { "annotation_id": [ "0f04331cbdb88dc33e06b6b970c11db7cc4e842d" ], "answer": [ { "evidence": [ "We investigate both methods, either in isolation or combined, on two translation directions (En-It and En-De) for which the length of the target is...
figures_and_tables: { "caption": [ "Figure 1: German and Italian human and machine translations (MT) are usually longer than their English source (SRC). We investigate enhanced NMT (MT*) that can also generate translations shorter than the source length. Text in red exceeds the length of the source, while underlined words point out ...
question: [ "How do they enrich the positional embedding with length information", "How do they condition the output to a given target-source class?" ]
retrieval_gt: [ [ "1910.10408-Methods ::: Length Encoding Method-0", "1910.10408-Methods ::: Length Encoding Method-3", "1910.10408-Methods ::: Length Encoding Method-2", "1910.10408-Methods ::: Length Encoding Method-1" ], [ "1910.10408-Methods ::: Length Token Method-0" ] ]
answer_gt: [ "They introduce new trigonometric encoding which besides information about position uses additional length information (abs or relative).", "They use three groups short/normal/long translation classes to learn length token, which is in inference used to bias network to generate desired length group." ]
__index_level_0__: 203

id: 2002.00876
title: Torch-Struct: Deep Structured Prediction Library
abstract: The literature on structured prediction for NLP describes a rich collection of distributions and algorithms over sequences, segmentations, alignments, and trees; however, these algorithms are difficult to utilize in deep learning frameworks. We introduce Torch-Struct, a library for structured prediction designed to tak...
full_text: { "paragraphs": [ [ "Structured prediction is an area of machine learning focusing on representations of spaces with combinatorial structure, and algorithms for inference and parameter estimation over these structures. Core methods include both tractable exact approaches like dynamic programming and spannin...
qas: { "answers": [ { "annotation_id": [ "83b0d2c9df28b611f74cbc625a6fa50df1bba8ae" ], "answer": [ { "evidence": [ "The library design of Torch-Struct follows the distributions API used by both TensorFlow and PyTorch BIBREF29. For each structured model in the lib...
figures_and_tables: { "caption": [ "Figure 1: Distribution of binary trees over an 1000- token sequence. Coloring shows the marginal probabilities of every span. Torch-Struct is an optimized collection of common CRF distributions used in NLP designed to integrate with deep learning frameworks.", "Table 1: Models and algorithms i...
question: [ "Is this library implemented into Torch or is framework agnostic?" ]
retrieval_gt: [ [ "2002.00876-Introduction-4", "2002.00876-Introduction-6", "2002.00876-Introduction-7", "2002.00876-Introduction-5" ] ]
answer_gt: [ "It uses deep learning framework (pytorch)" ]
__index_level_0__: 205

id: 1905.13413
title: Improving Open Information Extraction via Iterative Rank-Aware Learning
abstract: Open information extraction (IE) is the task of extracting open-domain assertions from natural language sentences. A key step in open IE is confidence modeling, ranking the extractions based on their estimated quality to adjust precision and recall of extracted assertions. We found that the extraction likelihood, a con...
full_text: { "paragraphs": [ [ "Open information extraction (IE, sekine2006demand, Banko:2007:OIE) aims to extract open-domain assertions represented in the form of $n$ -tuples (e.g., was born in; Barack Obama; Hawaii) from natural language sentences (e.g., Barack Obama was born in Hawaii). Open IE started from rule-b...
qas: { "answers": [ { "annotation_id": [ "23c5a7ddd1f154488e822601198303f3e02cc4f7" ], "answer": [ { "evidence": [ "Compared to using external models for confidence modeling, an advantage of the proposed method is that the base model does not change: the binary c...
figures_and_tables: { "caption": [ "Figure 1: Iterative rank-aware learning.", "Table 1: Dataset statistics.", "Table 2: Case study of reranking effectiveness. Red for predicate and blue for arguments.", "Figure 2: AUC and F1 at different iterations.", "Table 4: AUC and F1 on OIE2016.", "Table 5: Proportions of t...
question: [ "How does this compare to traditional calibration methods like Platt Scaling?" ]
retrieval_gt: [ [ "1905.13413-Introduction-1", "1905.13413-Iterative Learning-0", "1905.13413-Experimental Settings-1" ] ]
answer_gt: [ "No reliability diagrams are provided and no explicit comparison is made between confidence scores or methods." ]
__index_level_0__: 207

id: 2003.05995
title: CRWIZ: A Framework for Crowdsourcing Real-Time Wizard-of-Oz Dialogues
abstract: Large corpora of task-based and open-domain conversational dialogues are hugely valuable in the field of data-driven dialogue systems. Crowdsourcing platforms, such as Amazon Mechanical Turk, have been an effective method for collecting such large amounts of data. However, difficulties arise when task-based dialogues r...
full_text: { "paragraphs": [ [ "Recent machine learning breakthroughs in dialogue systems and their respective components have been made possible by training on publicly available large scale datasets, such as ConvAI BIBREF0, bAbI BIBREF1 and MultiWoZ BIBREF2, many of which are collected on crowdsourcing services, suc...
qas: { "answers": [ { "annotation_id": [ "67953a768253175e8b82edaf51cba6604a936010" ], "answer": [ { "evidence": [ "In this paper, we provide a brief survey of existing datasets and describe the CRWIZ framework for pairing crowdworkers and having half of them act...
figures_and_tables: { "caption": [ "Table 1: Comparison of relevant recent works. In order, the columns refer to: the dataset and reference; if the dataset was generated using Wizard-of-Oz techniques; if there was a unique participant per role for the whole dialogue; if the dataset was crowdsourced; the type of interaction modality ...
question: [ "Is CRWIZ already used for data collection, what are the results?" ]
retrieval_gt: [ [ "2003.05995-Data Analysis ::: Subjective Data-0", "2003.05995-Data Analysis-0", "2003.05995-Data Analysis ::: Single vs Multiple Wizards-0", "2003.05995-Data Analysis ::: Single vs Multiple Wizards-1", "2003.05995-Data Analysis ::: Subjective Data-2", "2003.05995-Data Analysis ::: Limitati...
answer_gt: [ "Yes, CRWIZ has been used for data collection and its initial use resulted in 145 dialogues. The average time taken for the task was close to the estimate of 10 minutes, 14 dialogues (9.66%) resolved the emergency in the scenario, and these dialogues rated consistently higher in subjective and objective ratings tha...
__index_level_0__: 211

id: 1907.02636
title: Collecting Indicators of Compromise from Unstructured Text of Cybersecurity Articles using Neural-Based Sequence Labelling
abstract: Indicators of Compromise (IOCs) are artifacts observed on a network or in an operating system that can be utilized to indicate a computer intrusion and detect cyber-attacks in an early stage. Thus, they exert an important role in the field of cybersecurity. However, state-of-the-art IOCs detection systems rely heavily ...
full_text: { "paragraphs": [ [ "Indicators of Compromise (IOCs) are forensic artifacts that are used as signs when a system has been compromised by an attacker or infected with a particular piece of malware. To be specific, IOCs are composed of some combinations of virus signatures, IPs, URLs or domain names of botnet...
qas: { "answers": [ { "annotation_id": [ "102b5f1010602ad1ea20ccdc52d330557bfc7433" ], "answer": [ { "evidence": [ "As shown in TABLE TABREF24 , we report the micro average of precision, recall and F1-score for all 11 types of labels for a baseline as well as the...
figures_and_tables: { "caption": [ "Fig. 2. ANN model of sequence labeling for IOCs automatic identification", "TABLE I STATISTICS OF DATASETS (NUMBERS OF TRAINING / VALIDATION / TEST SET)", "TABLE II EVALUATION RESULTS (MICRO AVERAGE FOR 11 LABELS)", "TABLE III EXAMPLES OF CORRECT IDENTIFICATION BY THE PROPOSED MODEL", ...
question: [ "What contextual features are used?" ]
retrieval_gt: [ [ "1907.02636-Contextual Features-0" ] ]
answer_gt: [ "The words that can indicate the characteristics of the neighbor words as contextual keywords and generate it from the automatically extracted contextual keywords." ]
__index_level_0__: 214

id: 1605.08675
title: Boosting Question Answering by Deep Entity Recognition
abstract: In this paper an open-domain factoid question answering system for Polish, RAFAEL, is presented. The system goes beyond finding an answering sentence; it also extracts a single string, corresponding to the required entity. Herein the focus is placed on different approaches to entity recognition, essential for retrievin...
full_text: { "paragraphs": [ [ "A Question Answering (QA) system is a computer program capable of understanding questions in a natural language, finding answers to them in a knowledge base and providing answers in the same language. So broadly defined task seems very hard; BIBREF0 describes it as AI-Complete, i.e. equ...
qas: { "answers": [ { "annotation_id": [ "3dd14ec7c6c2a4fa560f7cff98479063dda0e1c9" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 3. Question answering accuracy of RAFAEL with different entity recognition strategies: quantities only (Quant), traditional N...
figures_and_tables: { "caption": [ "Fig. 1. Overall architecture of the QA system – RAFAEL. See descriptions of elements in text.", "Fig. 2. Outline of a question focus analysis procedure used to determine an entity type in case of ambiguous interrogative pronouns.", "Fig. 3. Example of the entity extraction process in DeepE...
question: [ "How is the data in RAFAEL labelled?" ]
retrieval_gt: [ [ "1605.08675-Knowledge Base Processing-4", "1605.08675-Knowledge Base Processing-5", "1605.08675-Knowledge Base Processing-3", "1605.08675-Knowledge Base Processing-2", "1605.08675-Knowledge Base Processing-1" ] ]
answer_gt: [ "Using a set of annotation tools such as Morfeusz, PANTERA, Spejd, NERF and Liner" ]
__index_level_0__: 215

id: 1709.08858
title: Polysemy Detection in Distributed Representation of Word Sense
abstract: In this paper, we propose a statistical test to determine whether a given word is used as a polysemic word or not. The statistic of the word in this test roughly corresponds to the fluctuation in the senses of the neighboring words and the word itself. Even though the sense of a word corresponds to a single vector, we...
full_text: { "paragraphs": [ [ "Distributed representation of word sense provides us with the ability to perform several operations on the word. One of the most important operations on a word is to obtain the set of words whose meaning is similar to the word, or whose usage in text is similar to the word. We call this...
qas: { "answers": [ { "annotation_id": [ "107800957bb3f9cc126bc15bd4413355fdfe15dc" ], "answer": [ { "evidence": [ "Distributed representation of word sense provides us with the ability to perform several operations on the word. One of the most important operatio...
figures_and_tables: { "caption": [ "TABLE I AUXILIARY VERBS, THEIR NEIGHBORING WORDS, AND SURROUNDING UNIFORMITIES. THE NEIGHBORING WORDS OF AN AUXILIARY VERB CONSIST OF OTHER AUXILIARY VERBS. THE WORD “MAY” HAS A SMALL SURROUNDING UNIFORMITY, ALTHOUGH ITS NEIGHBORING WORDS CONSIST OF AUXILIARY VERBS.", "TABLE II NAMES OF THE MO...
question: [ "How is the fluctuation in the sense of the word and its neighbors measured?" ]
retrieval_gt: [ [ "1709.08858-Introduction-1", "1709.08858-Introduction-0" ] ]
answer_gt: [ "Our method performs a statistical test to determine whether a given word is used polysemously in the text, according to the following steps:\n1) Setting N, the size of the neighbor.\n2) Choosing N neighboring words ai in the order whose angle with the vector of the given word w is the smallest.\n3) Computing the s...
__index_level_0__: 216

id: 1910.00825
title: Abstractive Dialog Summarization with Semantic Scaffolds
abstract: The demand for abstractive dialog summary is growing in real-world applications. For example, customer service center or hospitals would like to summarize customer service interaction and doctor-patient interaction. However, few researchers explored abstractive summarization on dialogs due to the lack of suitable datas...
full_text: { "paragraphs": [ [ "Summarization aims to condense a piece of text to a shorter version, retaining the critical information. On dialogs, summarization has various promising applications in the real world. For instance, the automatic doctor-patient interaction summary can save doctors' massive amount of tim...
qas: { "answers": [ { "annotation_id": [ "d214c4bc382c51d8f0cd08b640a46c76afbbbd86" ], "answer": [ { "evidence": [ "We show all the models' results in Table TABREF24. We observe that SPNet reaches the highest score in both ROUGE and CIC. Both Pointer-Generator an...
figures_and_tables: { "caption": [ "Figure 1: SPNet overview. The blue and yellow box is the user and system encoder respectively. The encoders take the delexicalized conversation as input. The slots values are aligned with their slots position. Pointing mechanism merges attention distribution and vocabulary distribution to obtain t...
question: [ "By how much does SPNet outperforms state-of-the-art abstractive summarization methods on evaluation metrics?", "Is it expected to have speaker role, semantic slot and dialog domain annotations in real world datasets?", "How does new evaluation metric considers critical informative entities?" ]
retrieval_gt: [ [ "1910.00825-6-Table1-1.png", "1910.00825-Results and Discussions ::: Automatic Evaluation Results-1" ], [ "1910.00825-Conclusion and Future Work-1" ], [ "1910.00825-Experimental Settings ::: Evaluation Metrics-3", "1910.00825-Experimental Settings ::: Evaluation Metrics-4", "1910.0...
answer_gt: [ "SPNet vs best baseline:\nROUGE-1: 90.97 vs 90.68\nCIC: 70.45 vs 70.25", "Not at the moment, but summaries can be additionaly extended with this annotations.", "Answer with content missing: (formula for CIC) it accounts for the most important information within each dialog domain. CIC can be applied to any summ...
__index_level_0__: 223

id: 1910.00458
title: MMM: Multi-stage Multi-task Learning for Multi-choice Reading Comprehension
abstract: Machine Reading Comprehension (MRC) for question answering (QA), which aims to answer a question given the relevant context passages, is an important way to test the ability of intelligence systems to understand human language. Multiple-Choice QA (MCQA) is one of the most difficult tasks in MRC because it often require...
full_text: { "paragraphs": [ [ "Building a system that comprehends text and answers questions is challenging but fascinating, which can be used to test the machine's ability to understand human language BIBREF0, BIBREF1. Many machine reading comprehension (MRC) based question answering (QA) scenarios and datasets have...
qas: { "answers": [ { "annotation_id": [ "11e9dc8da152c948ba3f0ed165402dffad6fae49" ], "answer": [ { "evidence": [ "We first evaluate our method on the DREAM dataset. The results are summarized in Table TABREF16. In the table, we first report the accuracy of the ...
figures_and_tables: { "caption": [ "Table 1: Data samples of DREAM dataset. ( √ : the correct answer)", "Figure 1: Model architecture. “Encoder”is a pre-trained sentence encoder such as BERT. “Classifier” is a top-level classifier.", "Figure 2: Multi-stage and multi-task fine-tuning strategy.", "Table 2: Statistics of MC...
question: [ "What are state of the art methods MMM is compared to?" ]
retrieval_gt: [ [ "1910.00458-4-Table3-1.png" ] ]
answer_gt: [ "FTLM++, BERT-large, XLNet" ]
__index_level_0__: 225

id: 2001.11268
title: Data Mining in Clinical Trial Text: Transformers for Classification and Question Answering Tasks
abstract: This research on data extraction methods applies recent advances in natural language processing to evidence synthesis based on medical texts. Texts of interest include abstracts of clinical trials in English and in multilingual contexts. The main focus is on information characterized via the Population, Intervention, C...
full_text: { "paragraphs": [ [ "Systematic reviews (SR) of randomized controlled trials (RCTs) are regarded as the gold standard for providing information about the effects of interventions to healthcare practitioners, policy makers and members of the public. The quality of these reviews is ensured through a strict me...
qas: { "answers": [ { "annotation_id": [ "11ea0b3864122600cc8ab3c6e1d34caea0d87c8c" ], "answer": [ { "evidence": [ "In this work we investigate state-of-the-art methods for language modelling and sentence classification. Our contributions are centred around devel...
figures_and_tables: { "caption": [ "Table 1: Classes for the sentence classification task.", "Figure 1: Colour coded example for a population entity annotation, converted to SQuAD v.2 format. Combined data are used to train and evaluate the system.", "Figure 2: Visualization of training sentences using BERTbase. The x and y-...
question: [ "What are the problems related to ambiguity in PICO sentence prediction tasks?" ]
retrieval_gt: [ [ "2001.11268-6-Figure2-1.png", "2001.11268-RESULTS ::: Feature representation and contextualization-2" ] ]
answer_gt: [ "Some sentences are associated to ambiguous dimensions in the hidden state output" ]
__index_level_0__: 226

id: 1706.07179
title: RelNet: End-to-End Modeling of Entities & Relations
abstract: We introduce RelNet: a new model for relational reasoning. RelNet is a memory augmented neural network which models entities as abstract memory slots and is equipped with an additional relational memory which models relations between all memory pairs. The model thus builds an abstract knowledge graph on the entities an...
full_text: { "paragraphs": [ [ "Reasoning about entities and their relations is an important problem for achieving general artificial intelligence. Often such problems are formulated as reasoning over graph-structured representation of knowledge. Knowledge graphs, for example, consist of entities and relations between...
qas: { "answers": [ { "annotation_id": [ "a5d0953d56d8cd11ea834da09e2416aee83102ea" ], "answer": [ { "evidence": [ "Recently, BIBREF17 proposed a dynamic memory based neural network for implicitly modeling the state of entities present in the text for question an...
figures_and_tables: { "caption": [ "Figure 1: RelNet Model: The model represents the state of the world as a neural turing machine with relational memory. At each time step, the model reads the sentence into an encoding vector and updates both entity memories and all edges between them representing the relations.", "Table 1: Mea...
question: [ "How is knowledge stored in the memory?" ]
retrieval_gt: [ [ "1706.07179-RelNet Model-1" ] ]
answer_gt: [ "entity memory and relational memory." ]
__index_level_0__: 227
1909.08824
Modeling Event Background for If-Then Commonsense Reasoning Using Context-aware Variational Autoencoder
Understanding events and event-centered commonsense reasoning are crucial for natural language processing (NLP). Given an observed event, it is trivial for humans to infer its intents and effects, while this type of If-Then reasoning still remains challenging for NLP systems. To facilitate this, an If-Then commonsense rea...
{ "paragraphs": [ [ "Recently, event-centered commonsense knowledge has attracted much attention BIBREF0, BIBREF1, BIBREF2, BIBREF3, because understanding events is an important component of NLP. Given a daily-life event, humans can easily understand it and reason about its causes, effects, and so on. How...
{ "answers": [ { "annotation_id": [ "7f7d9a78c51f1de52959ee1634d8d01fc56c9efd" ], "answer": [ { "evidence": [ "We first compare the perplexity of CWVAE with baseline methods. Perplexity measures the probability of model to regenerate the exact targets, which...
{ "caption": [ "Figure 1: A illustration of two challenging problems in IfThen reasoning. (a) Given an observed event, the feelings about this event could be multiple. (b) Background knowledge is need for generating reasonable inferences, which is absent in the dataset (marked by dashed lines).", "Table 1: Hi...
[ "How do they measure the diversity of inferences?", "By how much do they improve the accuracy of inferences over state-of-the-art methods?", "How does the context-aware variational autoencoder learn event background information?" ]
[ [ "1909.08824-Experiments ::: Evaluation Metrics ::: Automatic Evaluation-0" ], [ "1909.08824-Experiments ::: Evaluation Metrics ::: Automatic Evaluation-0", "1909.08824-7-Table6-1.png", "1909.08824-6-Table4-1.png" ], [ "1909.08824-Introduction-5" ] ]
[ "by number of distinct n-grams", "On Event2Mind, the accuracy of the proposed method is improved by absolute BLEU 2.9, 10.87, 1.79 for xIntent, xReact and oReact respectively.\nOn the Atomic dataset, the accuracy of the proposed method is improved by absolute BLEU 3.95, 4.11, 4.49 for xIntent, xReact and oReact.respect...
228
1701.03214
An Empirical Comparison of Simple Domain Adaptation Methods for Neural Machine Translation
In this paper, we propose a novel domain adaptation method named"mixed fine tuning"for neural machine translation (NMT). We combine two existing approaches namely fine tuning and multi domain NMT. We first train an NMT model on an out-of-domain parallel corpus, and then fine tune it on a parallel corpus which is a mix ...
{ "paragraphs": [ [ "One of the most attractive features of neural machine translation (NMT) BIBREF0 , BIBREF1 , BIBREF2 is that it is possible to train an end to end system without the need to deal with word alignments, translation rules and complicated decoding algorithms, which are a characteristic of st...
{ "answers": [ { "annotation_id": [ "f92d4930c3a5af4cac3ed3b914ec9a554dfeade4" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 1: Domain adaptation results (BLEU-4 scores) for IWSLT-CE using NTCIR-CE." ], "extractive_spans": [], ...
{ "caption": [ "Figure 1: Fine tuning for domain adaptation", "Figure 2: Tag based multi domain NMT", "Table 1: Domain adaptation results (BLEU-4 scores) for IWSLT-CE using NTCIR-CE.", "Table 2: Domain adaptation results (BLEU-4 scores) for WIKI-CJ using ASPEC-CJ." ], "file": [ "2-Figure1-1.pn...
[ "How much improvement does their method get over the fine tuning baseline?" ]
[ [ "1701.03214-3-Table1-1.png" ] ]
[ "0.08 points on the 2011 test set, 0.44 points on the 2012 test set, 0.42 points on the 2013 test set for IWSLT-CE." ]
230
1611.02550
Discriminative Acoustic Word Embeddings: Recurrent Neural Network-Based Approaches
Acoustic word embeddings --- fixed-dimensional vector representations of variable-length spoken word segments --- have begun to be considered for tasks such as speech recognition and query-by-example search. Such embeddings can be learned discriminatively so that they are similar for speech segments corresponding to th...
{ "paragraphs": [ [ "Many speech processing tasks – such as automatic speech recognition or spoken term detection – hinge on associating segments of speech signals with word labels. In most systems developed for such tasks, words are broken down into sub-word units such as phones, and models are built for t...
{ "answers": [ { "annotation_id": [ "1fd4f3fbe7b6046c29581d726d5cfe3e080fd7c8" ], "answer": [ { "evidence": [ "An acoustic word embedding is a function that takes as input a speech segment corresponding to a word, INLINEFORM0 , where each INLINEFORM1 is a ve...
{ "caption": [ "Fig. 1: LSTM-based acoustic word embedding model. For GRUbased models, the structure is the same, but the LSTM cells are replaced with GRU cells, and there is no cell activation vector; the recurrent connections only carry the hidden state vector hlt.", "Fig. 2: Effect of embedding dimensional...
[ "By how much do they outpeform previous results on the word discrimination task?" ]
[ [ "1611.02550-5-Table1-1.png" ] ]
[ "Their best average precision tops previous best result by 0.202" ]
233
1601.06068
Paraphrase Generation from Latent-Variable PCFGs for Semantic Parsing
One of the limitations of semantic parsing approaches to open-domain question answering is the lexicosyntactic gap between natural language questions and knowledge base entries -- there are many ways to ask a question, all with the same answer. In this paper we propose to bridge this gap by generating paraphrases of th...
{ "paragraphs": [ [ "Semantic parsers map sentences onto logical forms that can be used to query databases BIBREF0 , BIBREF1 , instruct robots BIBREF2 , extract information BIBREF3 , or describe visual scenes BIBREF4 . In this paper we consider the problem of semantically parsing questions into Freebase log...
{ "answers": [ { "annotation_id": [ "208951f0d5f93c878368122d70fd94c337104a5e" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": false, "yes_no":...
{ "caption": [ "Figure 1: An example word lattice for the question What language do people in Czech Republic speak? using the lexical and phrasal rules from the PPDB.", "Figure 2: Trees used for bi-layered L-PCFG training. The questions what day is nochebuena, when is nochebuena and when is nochebuena celebra...
[ "How many paraphrases are generated per question?" ]
[ [ "1601.06068-Implementation Details-0" ] ]
[ "10*n paraphrases, where n depends on the number of paraphrases that contain the entity mention spans" ]
235
1709.07916
Characterizing Diabetes, Diet, Exercise, and Obesity Comments on Twitter
Social media provide a platform for users to express their opinions and share information. Understanding public health opinions on social media, such as Twitter, offers a unique approach to characterizing common health issues such as diabetes, diet, exercise, and obesity (DDEO), however, collecting and analyzing a larg...
{ "paragraphs": [ [ "The global prevalence of obesity has doubled between 1980 and 2014, with more than 1.9 billion adults considered as overweight and over 600 million adults considered as obese in 2014 BIBREF0 . Since the 1970s, obesity has risen 37 percent affecting 25 percent of the U.S. adults BIBREF1 ...
{ "answers": [ { "annotation_id": [ "13493df9ec75ae877c9904e23729ff119814671f" ], "answer": [ { "evidence": [ "This phase collected tweets using Twitter's Application Programming Interfaces (API) BIBREF43 . Within the Twitter API, diabetes, diet, exercise, a...
{ "caption": [ "Figure 1: A Sample of Tweets", "Table 1: DDEO Queries", "Table 2: DDEO Topics and Subtopics - Diabetes, Diet, Exercise, and Obesity are shown with italic and underline styles in subtopics", "Figure 2: DDEO Correlation P-Value", "Table 3: Topics Examples" ], "file": [ "3-Fig...
[ "How strong was the correlation between exercise and diabetes?", "How were topics of interest about DDEO identified?" ]
[ [ "1709.07916-6-Figure2-1.png", "1709.07916-Results-5" ], [ "1709.07916-Topic Discovery-1", "1709.07916-Topic Discovery-3", "1709.07916-Topic Discovery-0" ] ]
[ "weak correlation with p-value of 0.08", "using topic modeling model Latent Dirichlet Allocation (LDA)" ]
236
1909.00154
Rethinking travel behavior modeling representations through embeddings
This paper introduces the concept of travel behavior embeddings, a method for re-representing discrete variables that are typically used in travel demand modeling, such as mode, trip purpose, education level, family type or occupation. This re-representation process essentially maps those variables into a latent space ...
{ "paragraphs": [ [ "Since their early days, representation in random utility behavior models has followed generally quite clear principles. For example, numeric quantities like travel time and cost may be directly used or transformed depending on observed non-linear effects (e.g. using log). Numeric variabl...
{ "answers": [ { "annotation_id": [ "5ac34eb67f1f8386ca9654d0d56e6e970c8f6cde" ], "answer": [ { "evidence": [ "The Swissmetro dataset consists of survey data collected on the trains between St. Gallen and Geneva, Switzerland, during March 1998. According to ...
{ "caption": [ "Figure 1: The skip gram architecture [7]", "Figure 2: Visualization of a subset of words from FastText word embeddings database [8]", "Figure 3: Some classical examples of embeddings algebra [9]", "Figure 4: The general idea", "Figure 5: Travel embeddings model", "Figure 6: Tra...
[ "How do their train their embeddings?", "How do they model travel behavior?", "How do their interpret the coefficients?" ]
[ [ "1909.00154-Travel behaviour embeddings ::: The general idea-5", "1909.00154-Travel behaviour embeddings ::: The general idea-9" ], [ "1909.00154-Travel behaviour embeddings-0" ], [ "1909.00154-An experiment with mode choice-1" ] ]
[ "The embeddings are learned several times using the training set, then the average is taken.", "The data from collected travel surveys is used to model travel behavior.", "The coefficients are projected back to the dummy variable space." ]
237
1908.05434
Sex Trafficking Detection with Ordinal Regression Neural Networks
Sex trafficking is a global epidemic. Escort websites are a primary vehicle for selling the services of such trafficking victims and thus a major driver of trafficker revenue. Many law enforcement agencies do not have the resources to manually identify leads from the millions of escort ads posted across dozens of publi...
{ "paragraphs": [ [ "Globally, human trafficking is one of the fastest growing crimes and, with annual profits estimated to be in excess of 150 billion USD, it is also among the most lucrative BIBREF0 . Sex trafficking is a form of human trafficking which involves sexual exploitation through coercion. Recen...
{ "answers": [ { "annotation_id": [ "1384b1e2ddc8d8417896cb3664c4586037474138" ], "answer": [ { "evidence": [ "All models are trained and evaluated using the same (w.r.t. data shuffle and split) 10-fold cross-validation (CV) on Trafficking-10k, except for HT...
{ "caption": [ "Figure 1: Overview of the ordinal regression neural network for text input. H represents a hidden state in a gated-feedback recurrent neural network.", "Figure 2: Ordinal regression layer with order penalty.", "Table 1: Description and distribution of labels in Trafficking-10K.", "Tabl...
[ "By how much do they outperform previous state-of-the-art models?" ]
[ [ "1908.05434-Comparison with Baselines-3", "1908.05434-7-Table2-1.png", "1908.05434-Comparison with Baselines-4" ] ]
[ "Proposed ORNN has 0.769, 1.238, 0.818, 0.772 compared to 0.778, 1.244, 0.813, 0.781 of best state of the art result on Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAE_M), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.)" ]
238
1909.02480
FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow
Most sequence-to-sequence (seq2seq) models are autoregressive; they generate each token by conditioning on previously generated tokens. In contrast, non-autoregressive seq2seq models generate all tokens in one pass, which leads to increased efficiency through parallel processing on hardware such as GPUs. However, direc...
{ "paragraphs": [ [ "Neural sequence-to-sequence (seq2seq) models BIBREF0, BIBREF1, BIBREF2, BIBREF3 generate an output sequence $\\mathbf {y} = \\lbrace y_1, \\ldots , y_T\\rbrace $ given an input sequence $\\mathbf {x} = \\lbrace x_1, \\ldots , x_{T^{\\prime }}\\rbrace $ using conditional probabilities $P...
{ "answers": [ { "annotation_id": [ "e452412e9567ff9c42bc5c5df5aa2294ce83ef7a" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": ...
{ "caption": [ "Figure 1: (a) Autoregressive (b) non-autoregressive and (c) our proposed sequence generation models. x is the source, y is the target, and z are latent variables.", "Figure 2: Neural architecture of FlowSeq, including the encoder, the decoder and the posterior networks, together with the multi...
[ "What is the performance difference between proposed method and state-of-the-arts on these datasets?" ]
[ [ "1909.02480-7-Table2-1.png", "1909.02480-Experiments ::: Main Results-3" ] ]
[ "Difference is around 1 BLEU score lower on average than state of the art methods." ]
242
2004.02393
Learning to Recover Reasoning Chains for Multi-Hop Question Answering via Cooperative Games
We propose the new problem of learning to recover reasoning chains from weakly supervised signals, i.e., the question-answer pairs. We propose a cooperative game approach to deal with this problem, in which how the evidence passages are selected and how the selected passages are connected are handled by two models that...
{ "paragraphs": [ [ "NLP tasks that require multi-hop reasoning have recently enjoyed rapid progress, especially on multi-hop question answering BIBREF0, BIBREF1, BIBREF2. Advances have benefited from rich annotations of supporting evidence, as in the popular multi-hop QA and relation extraction benchmarks,...
{ "answers": [ { "annotation_id": [ "8eefbea2f3cfcf402f9d072e674b0300e54adc66" ], "answer": [ { "evidence": [ "Method ::: Passage Ranking Model", "The key component of our framework is the Ranker model, which is provided with a question $q$ and $...
{ "caption": [ "Figure 1: An example of reasoning chains in HotpotQA (2- hop) and MedHop (3-hop). HotpotQA provides only supporting passages {P3, P9}, without order and linking information.", "Figure 2: Model overview. The cooperative Ranker and Reasoner are trained alternatively. The Ranker selects a passage...
[ "What benchmarks are created?" ]
[ [ "2004.02393-Definition of Chain Accuracy-0", "2004.02393-Definition of Chain Accuracy-2", "2004.02393-Definition of Chain Accuracy-1" ] ]
[ "Answer with content missing: (formula) The accuracy is defined as the ratio # of correct chains predicted to # of evaluation samples" ]
244
2004.01694
A Set of Recommendations for Assessing Human-Machine Parity in Language Translation
The quality of machine translation has increased remarkably over the past years, to the degree that it was found to be indistinguishable from professional human translation in a number of empirical investigations. We reassess Hassan et al.'s 2018 investigation into Chinese to English news translation, showing that the ...
{ "paragraphs": [ [ "Machine translation (MT) has made astounding progress in recent years thanks to improvements in neural modelling BIBREF0, BIBREF1, BIBREF2, and the resulting increase in translation quality is creating new challenges for MT evaluation. Human evaluation remains the gold standard, but the...
{ "answers": [ { "annotation_id": [ "1ddd2172cbc25dc21125633fb2e28aec5c10e7d3" ], "answer": [ { "evidence": [ "Our paper investigates three aspects of human MT evaluation, with a special focus on assessing human–machine parity: the choice of raters, the use ...
{ "caption": [ "Table 1: Ranks and TrueSkill scores (the higher the better) of one human (HA) and two machine translations (MT1, MT2) for evaluations carried out by expert and non-expert translators. An asterisk next to a translation indicates that this translation is significantly better than the one in the next...
[ "What percentage fewer errors did professional translations make?", "What was the weakness in Hassan et al's evaluation design?" ]
[ [ "2004.01694-12-Table5-1.png", "2004.01694-Reference Translations ::: Quality-7" ], [ "2004.01694-Background ::: Assessing Human–Machine Parity ::: Reference Translations-0", "2004.01694-Background ::: Assessing Human–Machine Parity ::: Choice of Raters-0", "2004.01694-Background ::: Assess...
[ "36%", "MT developers to which crowd workers were compared are usually not professional translators, evaluation of sentences in isolation prevents raters from detecting translation errors, and the test set used was not originally written in Chinese\n" ]
245
1909.02635
Effective Use of Transformer Networks for Entity Tracking
Tracking entities in procedural language requires understanding the transformations arising from actions on entities as well as those entities' interactions. While self-attention-based pre-trained language encoders like GPT and BERT have been successfully applied across a range of natural language understanding tasks, ...
{ "paragraphs": [ [ "Transformer based pre-trained language models BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 have been shown to perform remarkably well on a range of tasks, including entity-related tasks like coreference resolution BIBREF5 and named entity recognition BIBREF0. This performance has been ge...
{ "answers": [ { "annotation_id": [ "1516c86c36ecb2bb8a543465d6ac12220ed1a226" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": ...
{ "caption": [ "Figure 1: Process Examples from (a) RECIPES as a binary classification task of ingredient detection, and (b) PROPARA as a structured prediction task of identifying state change sequences. Both require cross-sentence reasoning, such as knowing what components are in a mixture and understanding verb...
[ "What evidence do they present that the model attends to shallow context clues?", "In what way is the input restructured?" ]
[ [ "1909.02635-Analysis ::: Gradient based Analysis-0" ], [ "1909.02635-Entity-Conditioned Models-1", "1909.02635-Entity-Conditioned Models ::: Sentence Level vs. Document Level-0" ] ]
[ "Using model gradients with respect to input features, they showed that the most important model inputs are verbs associated with entities, which indicates that the model attends to shallow context clues", "In four entity-centric ways - entity-first, entity-last, document-level and sentence-level" ]
247
1904.00648
Recognizing Musical Entities in User-generated Content
Recognizing Musical Entities is important for Music Information Retrieval (MIR) since it can improve the performance of several tasks such as music recommendation, genre classification or artist similarity. However, most entity recognition systems in the music domain have concentrated on formal texts (e.g. artists' bio...
{ "paragraphs": [ [ "The increasing use of social media and microblogging services has broken new ground in the field of Information Extraction (IE) from user-generated content (UGC). Understanding the information contained in users' content has become one of the main goals for many applications, due to the ...
{ "answers": [ { "annotation_id": [ "15418edd8c72bc8bc3efceb68fa9202d76da15a7" ], "answer": [ { "evidence": [ "The performances of the NER experiments are reported separately for three different parts of the system proposed.", "Table 6 presents t...
{ "caption": [ "Table 2. Example of entities annotated and corresponding formal forms, from the user-generated tweet (1) in Table 1.", "Table 3. Examples of bot-generated tweets.", "Table 4. Tokens’ distributions within the two datasets: user-generated tweets (top) and bot-generated tweets (bottom)", ...
[ "What language is the Twitter content in?" ]
[ [ "1904.00648-Introduction-1" ] ]
[ "English" ]
248
1711.11221
Modeling Coherence for Neural Machine Translation with Dynamic and Topic Caches
Sentences in a well-formed text are connected to each other via various links to form the cohesive structure of the text. Current neural machine translation (NMT) systems translate a text in a conventional sentence-by-sentence fashion, ignoring such cross-sentence links and dependencies. This may lead to generate an in...
{ "paragraphs": [ [ "In the literature, several cache-based translation models have been proposed for conventional statistical machine translation, besides traditional n-gram language models and neural language models. In this section, we will first introduce related work in cache-based language models and ...
{ "answers": [ { "annotation_id": [ "24b8501e77da8e331182557dea36f83fd31de3e7" ], "answer": [ { "evidence": [ "We want to further study how the proposed cache-based neural model influence coherence in document translation. For this, we follow Lapata2005Autom...
{ "caption": [ "Figure 1: Architecture of NMT with the neural cache model. Pcache is the probability for a next target word estimated by the cache-based neural model.", "Figure 2: Schematic diagram of the topic projection during the testing process.", "Figure 3: Architecture of the cache model.", "Tab...
[ "What evaluations did the authors use on their system?" ]
[ [ "1711.11221-10-Table6-1.png", "1711.11221-8-Table1-1.png", "1711.11221-9-Table3-1.png" ] ]
[ "BLEU scores, exact matches of words in both translations and topic cache, and cosine similarities of adjacent sentences for coherence." ]
255
1912.07025
Indiscapes: Instance Segmentation Networks for Layout Parsing of Historical Indic Manuscripts
Historical palm-leaf manuscript and early paper documents from Indian subcontinent form an important part of the world's literary and cultural heritage. Despite their importance, large-scale annotated Indic manuscript image datasets do not exist. To address this deficiency, we introduce Indiscapes, the first ever datas...
{ "paragraphs": [ [ "The collection and analysis of historical document images is a key component in the preservation of culture and heritage. Given its importance, a number of active research efforts exist across the world BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. In this paper, we focus on pal...
{ "answers": [ { "annotation_id": [ "16ff9c9f07a060d809fdb92a6e6044c47a21faf3" ], "answer": [ { "evidence": [ "FLOAT SELECTED: TABLE IV: Class-wise average IoUs and per-pixel accuracies on the test set. Refer to Table I for full names of abbreviated region t...
{ "caption": [ "Fig. 1: The five images on the left, enclosed by pink dotted line, are from the BHOOMI palm leaf manuscript collection while the remaining images (enclosed by blue dotted line) are from the ’Penn-in-Hand’ collection (refer to Section III). Note the inter-collection differences, closely spaced and ...
[ "What accuracy does CNN model achieve?", "How many documents are in the Indiscapes dataset?" ]
[ [ "1912.07025-3-TableI-1.png", "1912.07025-5-TableIV-1.png" ], [ "1912.07025-3-TableIII-1.png" ] ]
[ "Combined per-pixel accuracy for character line segments is 74.79", "508" ]
256
1709.01256
Semantic Document Distance Measures and Unsupervised Document Revision Detection
In this paper, we model the document revision detection problem as a minimum cost branching problem that relies on computing document distances. Furthermore, we propose two new document distance measures, word vector-based Dynamic Time Warping (wDTW) and word vector-based Tree Edit Distance (wTED). Our revision detecti...
{ "paragraphs": [ [ "It is a common habit for people to keep several versions of documents, which creates duplicate data. A scholarly article is normally revised several times before being published. An academic paper may be listed on personal websites, digital conference libraries, Google Scholar, etc. In ...
{ "answers": [ { "annotation_id": [ "8b0add840d20bf740a040223502d86b77dee5181" ], "answer": [ { "evidence": [ "We use precision, recall and F-measure to evaluate the detected revisions. A true positive case is a correctly identified revision. A false positiv...
{ "caption": [ "Figure 1: Revision network visualization", "Figure 2: Setting τ", "Figure 3: Corpora simulation", "Figure 4: Precision, recall and F-measure on the Wikipedia revision dumps", "Table 1: A simulated data set", "Figure 5: Average precision, recall and F-measure on the simulated da...
[ "What are simulated datasets collected?" ]
[ [ "1709.01256-Data Sets-3", "1709.01256-Data Sets-4" ] ]
[ "There are 6 simulated datasets, initialised with a corpus of size 550 and simulated by generating new documents from Wikipedia extracts and replacing existing documents" ]
257
1902.11049
Evaluating Rewards for Question Generation Models
Recent approaches to question generation have used modifications to a Seq2Seq architecture inspired by advances in machine translation. Models are trained using teacher forcing to optimise only the one-step-ahead prediction. However, at test time, the model is asked to generate a whole sequence, causing errors to propa...
{ "paragraphs": [ [ "Posing questions about a document in natural language is a crucial aspect of the effort to automatically process natural language data, enabling machines to ask clarification questions BIBREF0 , become more robust to queries BIBREF1 , and to act as automatic tutors BIBREF2 .", "Re...
{ "answers": [ { "annotation_id": [ "17e6b7b37247467814f2f6f83917ca3c8623aedd" ], "answer": [ { "evidence": [ "For the human evaluation, we follow the standard approach in evaluating machine translation systems BIBREF23 , as used for question generation by B...
{ "caption": [ "Table 1: Example generated questions for various fine-tuning objectives. The answer is highlighted in bold. The model trained on a QA reward has learned to simply point at the answer and exploit the QA model, while the model trained on a language model objective has learned to repeat common phrase...
[ "What human evaluation metrics were used in the paper?" ]
[ [ "1902.11049-Evaluation-1" ] ]
[ "rating questions on a scale of 1-5 based on fluency of language used and relevance of the question to the context" ]
259
1905.06906
Gated Convolutional Neural Networks for Domain Adaptation
Domain Adaptation explores the idea of how to maximize performance on a target domain, distinct from source domain, upon which the classifier was trained. This idea has been explored for the task of sentiment analysis extensively. The training of reviews pertaining to one domain and evaluation on another domain is wide...
{ "paragraphs": [ [ "With the advancement in technology and invention of modern web applications like Facebook and Twitter, users started expressing their opinions and ideologies at a scale unseen before. The growth of e-commerce companies like Amazon, Walmart have created a revolutionary impact in the fiel...
{ "answers": [ { "annotation_id": [ "cb93eb69ccaf6c5aeb4a0872eca940f6e7c3de73" ], "answer": [ { "evidence": [ "Amazon Reviews Dataset BIBREF24 is a large dataset with millions of reviews from different product categories. For our experiments, we consider a s...
{ "caption": [ "Fig. 1: Architecture of the proposed model", "Fig. 2: Variations in gates of the proposed GCN architecture.", "Table 1: Average training time for all the models on ARD", "Table 2: Accuracy scores on Multi Domain Dataset.", "Table 3: Accuracy scores on Multi Domain Dataset.", "T...
[ "For the purposes of this paper, how is something determined to be domain specific knowledge?" ]
[ [ "1905.06906-Datasets-1" ] ]
[ "reviews under distinct product categories are considered specific domain knowledge" ]
260
1809.09795
Deep contextualized word representations for detecting sarcasm and irony
Predicting context-dependent and non-literal utterances like sarcastic and ironic expressions still remains a challenging task in NLP, as it goes beyond linguistic patterns, encompassing common sense and shared knowledge as crucial components. To capture complex morpho-syntactic features that can usually serve as indic...
{ "paragraphs": [ [ "Sarcastic and ironic expressions are prevalent in social media and, due to the tendency to invert polarity, play an important role in the context of opinion mining, emotion recognition and sentiment analysis BIBREF0 . Sarcasm and irony are two closely related linguistic phenomena, with ...
{ "answers": [ { "annotation_id": [ "f0359b9fa0253f4c525798ade165f7b481f56f79" ], "answer": [ { "evidence": [ "We test our proposed approach for binary classification on either sarcasm or irony, on seven benchmark datasets retrieved from different media sour...
{ "caption": [ "Table 1: Benchmark datasets: Tweets, Reddit posts and online debates for sarcasm and irony detection.", "Table 2: Summary of our obtained results." ], "file": [ "3-Table1-1.png", "4-Table2-1.png" ] }
[ "What type of model are the ELMo representations used in?" ]
[ [ "1809.09795-Proposed Approach-2", "1809.09795-Proposed Approach-3" ] ]
[ "A bi-LSTM with max-pooling on top of it" ]
261
2003.01769
Phonetic Feedback for Speech Enhancement With and Without Parallel Speech Data
While deep learning systems have gained significant ground in speech enhancement research, these systems have yet to make use of the full potential of deep learning systems to provide high-level feedback. In particular, phonetic feedback is rare in speech enhancement research even though it includes valuable top-down i...
{ "paragraphs": [ [ "Typical speech enhancement techniques focus on local criteria for improving speech intelligibility and quality. Time-frequency prediction techniques use local spectral quality estimates as an objective function; time domain methods directly predict clean output with a potential spectral...
{ "answers": [ { "annotation_id": [ "186ba39454e05f9639db6260d2b306a1537e7783" ], "answer": [ { "evidence": [ "Several recent works have investigated jointly training the acoustic model with a masking speech enhancement model BIBREF11, BIBREF12, BIBREF13, bu...
{ "caption": [ "Fig. 1. Operations are listed inside shapes, the circles are operations that are not parameterized, the rectangles represent parameterized operations. The gray operations are not trained, meaning the loss is backpropagated without any updates until the front-end denoiser is reached.", "Fig. 2....
[ "By how much does using phonetic feedback improve state-of-the-art systems?" ]
[ [ "2003.01769-4-Table2-1.png", "2003.01769-Experiments ::: With parallel data-0" ] ]
[ "Improved AECNN-T by 2.1 and AECNN-T-SM by 0.9" ]
263
1806.09103
Subword-augmented Embedding for Cloze Reading Comprehension
Representation learning is the foundation of machine reading comprehension. In state-of-the-art models, deep learning methods broadly use word and character level representations. However, character is not naturally the minimal linguistic unit. In addition, with a simple concatenation of character and word embedding, p...
{ "paragraphs": [ [ "This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/", "A recent hot challenge is to train machines to read and comprehend human languages. Towards this end, various machine reading compr...
{ "answers": [ { "annotation_id": [ "60c9b737810c6bf6d0978eadbb33409f3b4734ff" ], "answer": [ { "evidence": [ "In our experiments, the short list is determined according to the word frequency. Concretely, we sort the vocabulary according to the word frequenc...
{ "caption": [ "Figure 1: Architecture of the proposed Subword-augmented Embedding Reader (SAW Reader).", "Table 1: Data statistics of CMRC-2017, PD and CFT.", "Table 2: Accuracy on CMRC-2017 dataset. Results marked with † are from the latest official CMRC2017 Leaderboard 7. The best results are in bold f...
[ "What are the baselines?" ]
[ [ "1806.09103-Main Results-1", "1806.09103-6-Table2-1.png", "1806.09103-7-Table3-1.png" ] ]
[ "AS Reader, GA Reader, CAS Reader" ]
271
1911.13087
Kurdish (Sorani) Speech to Text: Presenting an Experimental Dataset
We present an experimental dataset, Basic Dataset for Sorani Kurdish Automatic Speech Recognition (BD-4SK-ASR), which we used in the first attempt in developing an automatic speech recognition for Sorani Kurdish. The objective of the project was to develop a system that automatically could recognize simple sentences ba...
{ "paragraphs": [ [ "Kurdish language processing requires endeavor by interested researchers and scholars to overcome with a large gap which it has regarding the resource scarcity. The areas that need attention and the efforts required have been addressed in BIBREF0.", "The Kurdish speech recognition ...
{ "answers": [ { "annotation_id": [ "da0fc36116ec0c88876ce022a0c985ce91bedf28" ], "answer": [ { "evidence": [ "The BD-4SK-ASR Dataset ::: The Language Model", "We created the language from the transcriptions. The model was created using CMUSphinx...
{ "caption": [ "Figure 1: The Sorani sounds along with their phoneme representation." ], "file": [ "3-Figure1-1.png" ] }
[ "What are the results of the experiment?", "How was the dataset collected?", "How many annotators participated?" ]
[ [ "1911.13087-The BD-4SK-ASR Dataset ::: The Language Model-0" ], [ "1911.13087-The BD-4SK-ASR Dataset-0" ], [ "1911.13087-The BD-4SK-ASR Dataset ::: The Narration Files-0" ] ]
[ "They were able to create a language model from the dataset, but did not test it.", "text extracted from Sorani Kurdish primary school books and randomly created sentences", "1" ]
272
1711.02013
Neural Language Modeling by Jointly Learning Syntax and Lexicon
We propose a neural language model capable of unsupervised syntactic structure induction. The model leverages the structure information to form better semantic representations and better language modeling. Standard recurrent neural networks are limited by their structure and fail to efficiently use syntactic informatio...
{ "paragraphs": [ [ "Linguistic theories generally regard natural language as consisting of two part: a lexicon, the complete set of all possible words in a language; and a syntax, the set of rules, principles, and processes that govern the structure of sentences BIBREF0 . To generate a proper sentence, tok...
{ "answers": [ { "annotation_id": [ "22b7cf887e3387634b67deae37c4d197a85c1f98" ], "answer": [ { "evidence": [ "In Figure FIGREF32 , we visualize the syntactic distance estimated by the Parsing Network, while reading three different sequences from the PTB tes...
{ "caption": [ "Figure 1: Hard arrow represents syntactic tree structure and parent-to-child dependency relation, dash arrow represents dependency relation between siblings", "Figure 2: Proposed model architecture, hard line indicate valid connection in Reading Network, dash line indicate valid connection in ...
[ "How do they show their model discovers underlying syntactic structure?", "How do they measure performance of language model tasks?" ]
[ [ "1711.02013-Character-level Language Model-3" ], [ "1711.02013-8-Table1-1.png", "1711.02013-Word-level Language Model-4" ] ]
[ "By visualizing syntactic distance estimated by the parsing network", "BPC, Perplexity" ]
274
1909.00183
Extracting information from free text through unsupervised graph-based clustering: an application to patient incident records
The large volume of text in electronic healthcare records often remains underused due to a lack of methodologies to extract interpretable content. Here we present an unsupervised framework for the analysis of free text that combines text-embedding with paragraph vectors and graph-theoretical multiscale community detect...
{ "paragraphs": [ [ "", "The vast amounts of data collected by healthcare providers in conjunction with modern data analytics present a unique opportunity to improve the quality and safety of medical care for patient benefit BIBREF1. Much recent research in this area has been on personalised medicine,...
{ "answers": [ { "annotation_id": [ "82453702db84beeb6427825f2997da5bb04df935" ], "answer": [ { "evidence": [ "As a further application of our work, we have carried out a supervised classification task aimed at predicting the degree of harm of an incident di...
{ "caption": [ "Fig. 1: Pipeline for data analysis contains training of the text embedding model along with the two methods we showcase in this work. First is the graph-based unsupervised clustering of documents at different levels of resolution to find topic clusters only from the free text descriptions of hospi...
[ "How are content clusters used to improve the prediction of incident severity?", "What cluster identification method is used in this paper?" ]
[ [ "1909.00183-Graph-based framework for text analysis and clustering ::: Supervised Classification for Degree of Harm-0" ], [ "1909.00183-Graph-based framework for text analysis and clustering-2" ] ]
[ "they are used as additional features in a supervised classification task", "A combination of Minimum spanning trees, K-Nearest Neighbors and Markov Stability BIBREF15, BIBREF16, BIBREF17, BIBREF18" ]
275
1801.09030
Exploration on Generating Traditional Chinese Medicine Prescriptions from Symptoms with an End-to-End Approach
Traditional Chinese Medicine (TCM) is an influential form of medical treatment in China and surrounding areas. In this paper, we propose a TCM prescription generation task that aims to automatically generate a herbal medicine prescription based on textual symptom descriptions. Sequence-to-sequence (seq2seq) model has be...
{ "paragraphs": [ [ "Traditional Chinese Medicine (TCM) is one of the most important forms of medical treatment in China and the surrounding areas. TCM has accumulated large quantities of documentation and therapy records in the long history of development. Prescriptions consisting of herbal medication are ...
{ "answers": [ { "annotation_id": [ "19fe7a6492b6ef59d3db2c54da84da629ce7faf4" ], "answer": [ { "evidence": [ "In the TCM prescription generation task, the textual symptom descriptions can be seen as the question and the aim of the task is to produce a set o...
{ "caption": [ "Table 1: An example of a TCM symptom-prescription pair. As we are mainly concerned with the composition of the prescription, we only provide the herbs in the prescription.", "Figure 1: An illustration of our model. The model is built on the basis of seq2seq model with attention mechanism. We u...
[ "Why did they think this was a good idea?" ]
[ [ "1801.09030-Introduction-1" ] ]
[ "They think it will help human TCM practitioners make prescriptions." ]
278
1804.03396
QA4IE: A Question Answering based Framework for Information Extraction
Information Extraction (IE) refers to automatically extracting structured relation tuples from unstructured texts. Common IE solutions, including Relation Extraction (RE) and open IE systems, can hardly handle cross-sentence tuples, and are severely restricted by limited relation types as well as informal relation spec...
{ "paragraphs": [ [ "Information Extraction (IE), which refers to extracting structured information (i.e., relation tuples) from unstructured text, is the key problem in making use of large-scale texts. High quality extracted relation tuples can be used in various downstream applications such as Knowledge B...
{ "answers": [ { "annotation_id": [ "5370c482a9e9c424d28b8ecadac5f0bad4cc0b9e" ], "answer": [ { "evidence": [ "The input of our model are the words in the input text $x[1], ... , x[n]$ and query $q[1], ... , q[n]$ . We concatenate pre-trained word embeddings...
{ "caption": [ "Fig. 1. An overview of our QA4IE Framework.", "Table 1. Detailed Statistics of QA4IE Benchmark.", "Table 2. Comparison between existing IE benchmarks and QA benchmarks. The first two are IE benchmarks and the rest four are QA benchmarks.", "Fig. 2. An overview of our QA model.", "T...
[ "What QA models were used?" ]
[ [ "1804.03396-Question Answering Model-13", "1804.03396-Question Answering Model-9", "1804.03396-Question Answering Model-6", "1804.03396-Question Answering Model-7", "1804.03396-Question Answering Model-3", "1804.03396-Question Answering Model-1", "1804.03396-Question Answering Model-4"...
[ "A pointer network decodes the answer from a bidirectional LSTM with attention flow layer and self-matching layer, whose inputs come from word and character embeddings of the query and input text fed through a highway layer." ]
280
1707.03764
N-GrAM: New Groningen Author-profiling Model
We describe our participation in the PAN 2017 shared task on Author Profiling, identifying authors' gender and language variety for English, Spanish, Arabic and Portuguese. We describe both the final, submitted system, and a series of negative results. Our aim was to create a single model for both gender and language, ...
{ "paragraphs": [ [ "With the rise of social media, more and more people acquire some kind of on-line presence or persona, mostly made up of images and text. This means that these people can be considered authors, and thus that we can profile them as such. Profiling authors, that is, inferring personal char...
{ "answers": [ { "annotation_id": [ "ca1cbe32990697dc4b2c440c07fa82bfeee4c346" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 8. Results (accuracy) on the test set for variety, gender and their joint prediction.", "For the final evaluation...
{ "caption": [ "Table 1. Results (accuracy) for the 5-fold cross-validation", "Table 2. A list of values over which we performed the grid search.", "Figure 1. Decision Tree output", "Table 4. Results (accuracy) on the English data for Gender and Variety with and without part of speech tags.", "Tab...
[ "How do their results compare against other competitors in the PAN 2017 shared task on Author Profiling?", "On which task does the model do worst?", "On which task does the model do best?" ]
[ [ "1707.03764-Results on Test Data-0", "1707.03764-8-Table8-1.png" ], [ "1707.03764-Results on Test Data-2" ], [ "1707.03764-Results on Test Data-2" ] ]
[ "They achieved the best result in the PAN 2017 shared task, with accuracy 0.0013 above the 2nd-best baseline on Variety prediction, 0.0029 above on Gender prediction, and 0.0101 above on Joint prediction", "Gender prediction task", ...
282
1911.03842
Queens are Powerful too: Mitigating Gender Bias in Dialogue Generation
Models often easily learn biases present in the training data, and their predictions directly reflect this bias. We analyze the presence of gender bias in dialogue and examine the subsequent effect on generative chitchat dialogue models. Based on this analysis, we propose a combination of three techniques to mitigate b...
{ "paragraphs": [ [ "Since machine learning algorithms learn to model patterns present in training datasets, what they learn is affected by data quality. Analysis has found that model predictions directly reflect the biases found in training datasets, such as image classifiers learning to associate ethnicit...
{ "answers": [ { "annotation_id": [ "1adf5025419a86a5a9d6dfa3c94f2b10887ba8dc" ], "answer": [ { "evidence": [ "Each of the methods we explore improve in % gendered words, % male bias, and F1 over the baseline Transformer generation model, but we find combini...
{ "caption": [ "Table 1: Character persona examples from the LIGHT dataset. While there are relatively few examples of femalegendered personas, many of the existing ones exhibit bias. None of these personas were flagged by annotators during a review for offensive content.", "Table 2: An example dialogue from ...
[ "How does counterfactual data augmentation aim to tackle bias?", "In the targeted data collection approach, what type of data is targetted?" ]
[ [ "1911.03842-Methodology: Mitigating Bias in Generative Dialogue ::: Counterfactual Data Augmentation-0" ], [ "1911.03842-Methodology: Mitigating Bias in Generative Dialogue ::: Positive-Bias Data Collection ::: Gender-swapping Existing Personas-0" ] ]
[ "The training dataset is augmented by swapping all gendered words with their opposite-gender counterparts", "Gendered characters in the dataset" ]
285
1707.02377
Efficient Vector Representation for Documents through Corruption
We present an efficient document representation learning framework, Document Vector through Corruption (Doc2VecC). Doc2VecC represents each document as a simple average of word embeddings. It ensures a representation generated as such captures the semantic meanings of the document during learning. A corruption model is...
{ "paragraphs": [ [ "Text understanding starts with the challenge of finding machine-understandable representation that captures the semantics of texts. Bag-of-words (BoW) and its N-gram extensions are arguably the most commonly used document representations. Despite its simplicity, BoW works surprisingly w...
{ "answers": [ { "annotation_id": [ "6db29a269f42efdb89beabbd9c34bc64102f33af" ], "answer": [ { "evidence": [ "We compare against the following document representation baselines: bag-of-words (BoW); Denoising Autoencoders (DEA) BIBREF14 , a representation le...
{ "caption": [ "Figure 1: A new framework for learning document vectors.", "Table 1: Classification error of a linear classifier trained on various document representations on the Imdb dataset.", "Table 2: Learning time and representation generation time required by different representation learning algor...
[ "How do they determine which words are informative?" ]
[ [ "1707.02377-Sentiment analysis-4" ] ]
[ "Informative words are those that are not suppressed by the regularization performed." ]
286
1701.06538
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice...
{ "paragraphs": [ [ "Exploiting scale in both training data and model size has been central to the success of deep learning. When datasets are sufficiently large, increasing the capacity (number of parameters) of neural networks can give much better prediction accuracy. This has been shown in domains such a...
{ "answers": [ { "annotation_id": [ "c24cfd0839faf733f7671147bea2e508dc3f0869" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": ...
{ "caption": [ "Figure 1: A Mixture of Experts (MoE) layer embedded within a recurrent language model. In this case, the sparse gating function selects two experts to perform computations. Their outputs are modulated by the outputs of the gating network.", "Figure 2: Model comparison on 1-Billion-Word Languag...
[ "What improvement does the MOE model make over the SOTA on language modelling?" ]
[ [ "1701.06538-1 Billion Word Language Modeling Benchmark - Experimental Details-11", "1701.06538-1 Billion Word Language Modeling Benchmark-5", "1701.06538-7-Table1-1.png" ] ]
[ "Perplexity is improved from 34.7 to 28.0." ]
288
1905.10810
Evaluation of basic modules for isolated spelling error correction in Polish texts
Spelling error correction is an important problem in natural language processing, as a prerequisite for good performance in downstream tasks as well as an important feature in user-facing applications. For texts in Polish language, there exist works on specific error correction solutions, often developed for dealing wi...
{ "paragraphs": [ [ "Spelling error correction is a fundamental NLP task. Most language processing applications benefit greatly from being provided clean texts for their best performance. Human users of computers also often expect competent help in making spelling of their texts correct.", "Because of...
{ "answers": [ { "annotation_id": [ "91f989a06bf11f012960b7cdad07de1c33d7d969" ], "answer": [ { "evidence": [ "The experimental results are presented in Table TABREF4 . Diacritic swapping showed a remarkably poor performance, despite promising mentions in ex...
{ "caption": [ "Table 1: Test results for all the methods used. The loss measure is cross-entropy.", "Table 2: Discovered optimal weights for summing layers of ELMo embedding for initializing an error-correcting LSTM. The layers are numbered from the one that directly processes character and word input to the...
[ "What is the difference in performance between the interpretable system (e.g. vectors and cosine distance) and LSTM with ELMo system?" ]
[ [ "1905.10810-3-Table1-1.png", "1905.10810-Results-0" ] ]
[ "Accuracy of the best interpretable system was 0.3945, while accuracy of the LSTM-ELMo net was 0.6818." ]
289
1910.07481
Using Whole Document Context in Neural Machine Translation
In Machine Translation, considering the document as a whole can help to resolve ambiguities and inconsistencies. In this paper, we propose a simple yet promising approach to add contextual information in Neural Machine Translation. We present a method to add source context that captures the whole document with accurate ...
{ "paragraphs": [ [ "Neural machine translation (NMT) has grown rapidly in the past years BIBREF0, BIBREF1. It usually takes the form of an encoder-decoder neural network architecture in which source sentences are summarized into a vector representation by the encoder and are then decoded into target senten...
{ "answers": [ { "annotation_id": [ "408e8c7aa8047ab454e61244dddecc43adcd7511" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 5: Results obtained for the English-French and French-English translation tasks, scored on three test sets using BLEU and TER...
{ "caption": [ "Table 1: Example of augmented parallel data used to train theDocumentmodel. The source corpus contains document tags while the target corpus remains unchanged.", "Table 2: Detail of training and evaluation sets for the English-German pair, showing the number of lines, words in English (EN) and...
[ "Which language-pair had the better performance?" ]
[ [ "1910.07481-4-Table5-1.png" ] ]
[ "French-English" ]
291
2001.05493
A Unified System for Aggression Identification in English Code-Mixed and Uni-Lingual Texts
Wide usage of social media platforms has increased the risk of aggression, which results in mental stress and negatively affects people's lives through psychological agony, fighting behavior, and disrespect to others. The majority of such conversations contain code-mixed languages [28]. Additionally, the way used to expr...
{ "paragraphs": [ [ "The exponential increase of interactions on various social media platforms has generated a huge amount of data on platforms like Facebook and Twitter. These interactions have resulted in not only positive but also negative effects on billions of people owing to t...
{ "answers": [ { "annotation_id": [ "bf48a718d94133ed24e7ea54cb050ffaa688cf7b" ], "answer": [ { "evidence": [ "In future, we will explore other methods to increase the understanding of deep learning models on group targeted text, although the categories are ...
{ "caption": [ "Figure 1: Block diagram of the proposed system", "Table 1: Details of NLP features", "Figure 2: DPCNN", "Figure 3: DRNN", "Figure 4: Pooled BiLSTM", "Table 2: TRAC 2018, Details of English Code-Mixed Dataset", "Table 6: Results on Kaggle Test Dataset", "Figure 5: Confus...
[ "Which psycholinguistic and basic linguistic features are used?", "How have the differences in communication styles between Twitter and Facebook increased the complexity of the problem?", "What data/studies do the authors provide to support the assertion that the majority of aggressive conversations contain code...
[ [ "2001.05493-Introduction-12", "2001.05493-Methodology ::: NLP Features-0", "2001.05493-4-Table1-1.png" ], [ "2001.05493-Introduction-10" ], [ "2001.05493-Introduction-6" ] ]
[ "Emotion Sensor Feature, Part of Speech, Punctuation, Sentiment Analysis, Empath, TF-IDF Emoticon features", "Systems do not perform well on both Facebook and Twitter texts", "None" ]
293
1606.08140
STransE: a novel embedding model of entities and relationships in knowledge bases
Knowledge bases of real-world facts about entities and their relationships are useful resources for a variety of natural language processing tasks. However, because knowledge bases are typically incomplete, it is useful to be able to perform link prediction or knowledge base completion, i.e., predict whether a relation...
{ "paragraphs": [ [ "Knowledge bases (KBs), such as WordNet BIBREF0 , YAGO BIBREF1 , Freebase BIBREF2 and DBpedia BIBREF3 , represent relationships between entities as triples $(\\mathrm {head\\ entity, relation, tail\\ entity})$ . Even very large knowledge bases are still far from complete BIBREF4 , BIBREF...
{ "answers": [ { "annotation_id": [ "1c1dfad3a62e0b5a77ea7279312f43e2b0f155c0" ], "answer": [ { "evidence": [ "Let $\\mathcal {E}$ denote the set of entities and $\\mathcal {R}$ the set of relation types. For each triple $(h, r, t)$ , where $h, t \\in \\math...
{ "caption": [ "Table 1: The score functions fr(h, t) and the optimization methods (Opt.) of several prominent embedding models for KB completion. In all of these the entities h and t are represented by vectors h and t ∈ Rk respectively.", "Table 2: Statistics of the experimental datasets used in this study (...
[ "What datasets are used to evaluate the model?" ]
[ [ "1606.08140-Introduction-5" ] ]
[ "WN18, FB15k" ]
295
1901.02257
Multi-Perspective Fusion Network for Commonsense Reading Comprehension
Commonsense Reading Comprehension (CRC) is a significantly challenging task, aiming at choosing the right answer for the question referring to a narrative passage, which may require commonsense knowledge inference. Most of the existing approaches only fuse the interaction information of choice, passage, and question in...
{ "paragraphs": [ [ "Content: Task Definition", "1. Describe which field the task of commonsense reading comprehension (CRC) belongs to and how important it is.", "2. Define the task of CRC", "3. Data features of CRC", "4. Figure 1 shows an example.", "Machine Reading Comprehensi...
{ "answers": [ { "annotation_id": [ "1cbbd80eee1c4870bf7827e2e3bb278186731b7d" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 2: Experimental Results of Models" ], "extractive_spans": [], "free_form_answer": "SLQA, Rusalk...
{ "caption": [ "Fig. 1: Architecture of our MPFN Model.", "Table 2: Experimental Results of Models", "Table 3: Test Accuracy of Multi-Perspective", "Fig. 2: Influence of Word-level Interaction.", "Fig. 3: Visualization of Fusions" ], "file": [ "4-Figure1-1.png", "7-Table2-1.png", "...
[ "What baseline models do they compare against?" ]
[ [ "1901.02257-7-Table2-1.png" ] ]
[ "SLQA, Rusalka, HMA Model (single), TriAN (single), jiangnan (ensemble), MITRE (ensemble), TriAN (ensemble), HMA Model (ensemble)" ]
297
1710.01507
Identifying Clickbait: A Multi-Strategy Approach Using Neural Networks
Online media outlets, in a bid to expand their reach and subsequently increase revenue through ad monetisation, have begun adopting clickbait techniques to lure readers to click on articles. The article fails to fulfill the promise made by the headline. Traditional methods for clickbait detection have relied heavily on...
{ "paragraphs": [ [ "The Internet provides instant access to a wide variety of online content, news included. Formerly, users had static preferences, gravitating towards their trusted sources, incurring an unwavering sense of loyalty. The same cannot be said for current trends since users are likely to go w...
{ "answers": [ { "annotation_id": [ "1cbfdce25dfdc7c55ded63bbade870a96b66c848" ], "answer": [ { "evidence": [ "One common point in all the approaches yet has been the use of only textual features available in the dataset. Our model not only incorporates text...
{ "caption": [ "Figure 1: Model Architecture", "Table 1: Comparison of our model with existing methods" ], "file": [ "3-Figure1-1.png", "4-Table1-1.png" ] }
[ "What are the differences with previous applications of neural networks for this task?" ]
[ [ "1710.01507-Related Work-2" ] ]
[ "Unlike previous approaches, which used only textual features, this approach also considers related images" ]
298
2002.02492
Consistency of a Recurrent Language Model With Respect to Incomplete Decoding
Despite strong performance on a variety of tasks, neural sequence models trained with maximum likelihood have been shown to exhibit issues such as length bias and degenerate repetition. We study the related issue of receiving infinite-length sequences from a recurrent language model when using common decoding algorithm...
{ "paragraphs": [ [ "Neural sequence models trained with maximum likelihood estimation (MLE) have become a standard approach to modeling sequences in a variety of natural language applications such as machine translation BIBREF0, dialogue modeling BIBREF1, and language modeling BIBREF2. Despite this success...
{ "answers": [ { "annotation_id": [ "cfd1e076d4a9b5356e4b4202f216399e66547e50" ], "answer": [ { "evidence": [ "Table TABREF44 shows that consistent nucleus and top-$k$ sampling (§SECREF28) resulted in only terminating sequences, except for a few cases that w...
{ "caption": [ "Table 1. Non-termination ratio (rL (%)) of decoded sequences using ancestral sampling and incomplete decoding methods.", "Table 2. Non-termination ratio (rL (%)) of decoded sequences using consistent sampling methods.", "Table 3. Example continuations using nucleus and consistent nucleus (...
[ "How much improvement is gained from the proposed approaches?", "Is infinite-length sequence generation a result of training with maximum likelihood?" ]
[ [ "2002.02492-7-Table2-1.png", "2002.02492-7-Table1-1.png", "2002.02492-Empirical Validation ::: Consistency of the Proposed Methods ::: Consistent sampling.-0", "2002.02492-Empirical Validation ::: Consistency of the Proposed Methods ::: Self-terminating RNN.-1" ], [ "2002.02492-Conclusion-...
[ "It eliminates non-termination in some models, fixing a non-termination ratio of up to 6%.", "There is a strong conjecture that it might be the reason, but it is not proven." ]
299
2001.06354
Modality-Balanced Models for Visual Dialogue
The Visual Dialog task requires a model to exploit both image and conversational context information to generate the next response to the dialogue. However, via manual analysis, we find that a large number of conversational questions can be answered by only looking at the image without any access to the context history...
{ "paragraphs": [ [ "When we pursue conversations, context is important to keep the topic consistent or to answer questions which are asked by others, since most new utterances are made conditioned on related mentions or topic clues in the previous utterances in the conversation history. However, conversati...
{ "answers": [ { "annotation_id": [ "2c59967f5430dfaddbed7aeaa01ebf35e6afc767" ], "answer": [ { "evidence": [ "For evaluation, the Visual Dialog task employs four metrics. NDCG is the primary metric of the Visual Dialog Challenge which considers multiple sim...
{ "caption": [ "Figure 1: Examples of Visual Dialog Task. Some questions only need an image to be answered (Q8-A8 and Q3-A3 pairs in blue from each example, respectively), but others need conversation history (Q9-A9 and Q4-A4 pairs in orange from each example, respectively).", "Figure 2: The architecture of t...
[ "How big is the dataset for this challenge?" ]
[ [ "2001.06354-Experimental Setup ::: Dataset-0" ] ]
[ "133,287 images" ]
300
1910.08210
RTFM: Generalising to Novel Environment Dynamics via Reading
Obtaining policies that can generalise to new environments in reinforcement learning is challenging. In this work, we demonstrate that language understanding via a reading policy learner is a promising vehicle for generalisation to new environments. We propose a grounded policy learning problem, Read to Fight Monsters ...
{ "paragraphs": [ [ "Reinforcement learning (RL) has been successful in a variety of areas such as continuous control BIBREF0, dialogue systems BIBREF1, and game-playing BIBREF2. However, RL adoption in real-world problems is limited due to poor sample efficiency and failure to generalise to environments ev...
{ "answers": [ { "annotation_id": [ "e1baf02533ffcfb9d0e56ff82dd5c96ec07e8198" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": ...
{ "caption": [ "Figure 1: RTFM requires jointly reasoning over the goal, a document describing environment dynamics, and environment observations. This figure shows key snapshots from a trained policy on one randomly sampled environment. Frame 1 shows the initial world. In 4, the agent approaches “fanatical sword...
[ "How much better is the performance of the proposed model compared to the baselines?" ]
[ [ "1910.08210-6-Table1-1.png" ] ]
[ "The proposed model achieves a 66+-22 win rate, versus 13+-1 for the baseline CNN and 32+-3 for the baseline FiLM." ]
302
2001.05672
AandP: Utilizing Prolog for converting between active sentence and passive sentence with three-steps conversion
I introduce a simple but efficient method to solve one of the critical aspects of English grammar, namely the relationship between an active sentence and a passive sentence. In fact, an active sentence and its corresponding passive sentence express the same meaning, but their structures are different. I utilized Prolog [4] a...
{ "paragraphs": [ [ "Language plays a vital role in the human life. A language is a structured system of communication BIBREF2. There are various language systems in the world with the estimated number being between 5,000 and 7,000 BIBREF3. Natural Language Processing (NLP) which we commonly hear is a subfi...
{ "answers": [ { "annotation_id": [ "1e0fa4309a720cb4870ddfe8e6f05744cf596f7c" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": ...
{ "caption": [ "Figure 1: A variety of stuff in English grammar", "Figure 2: Basic rules for converting an active sentence to passive sentence", "Figure 3: The compact version of the representation of the active sentence", "Figure 4: The representation of the passive sentence", "Figure 5: The scen...
[ "What DCGs are used?" ]
[ [ "2001.05672-Results-1" ] ]
[ "The author's own DCG rules, defined from scratch." ]
305
1911.02711
Exploring Hierarchical Interaction Between Review and Summary for Better Sentiment Analysis
Sentiment analysis provides a useful overview of customer review contents. Many review websites allow a user to enter a summary in addition to a full review. It has been shown that jointly predicting the review summary and the sentiment rating benefits both tasks. However, these methods consider the integration of revi...
{ "paragraphs": [ [ "Sentiment analysis BIBREF0, BIBREF1 is a fundamental task in natural language processing. In particular, sentiment analysis of user reviews has wide applicationsBIBREF2, BIBREF3, BIBREF4, BIBREF5. In many review websites such as Amazon and IMDb, the user is allowed to give a summary in ...
{ "answers": [ { "annotation_id": [ "1e38f91f9d12f8b3d01c46b7b00f139d81e5df0e" ], "answer": [ { "evidence": [ "To address this issue, we further investigate a joint encoder for review and summary, which is demonstrated in Figure FIGREF4. The model works by j...
{ "caption": [ "Figure 2: Three model structures for incorporating summary into sentiment classification", "Figure 3: Architecture of proposed model (Xw = xw1 , x w 2 , ..., x w n : review; X s = xs1, x s 2, ..., x s m: summary).", "Table 1: Data statistics. Size: number of samples, #Review: the average l...
[ "What is the performance difference of using a generated summary vs. a user-written one?" ]
[ [ "1911.02711-7-Table4-1.png", "1911.02711-7-Table5-1.png", "1911.02711-Experiments ::: Datasets-0", "1911.02711-Experiments ::: Results-0" ] ]
[ "2.7 accuracy points" ]
307
2001.11381
Generaci\'on autom\'atica de frases literarias en espa\~nol
In this work we present a state of the art in the area of Computational Creativity (CC). In particular, we address the automatic generation of literary sentences in Spanish. We propose three models of text generation based mainly on statistical algorithms and shallow parsing analysis. We also present some rather encour...
{ "paragraphs": [ [ "Los investigadores en Procesamiento de Lenguaje Natural (PLN) durante mucho tiempo han utilizado corpus constituidos por documentos enciclopédicos (notablemente Wikipedia), periodísticos (periódicos o revistas) o especializados (documentos legales, científicos o técnicos) para el desarr...
{ "answers": [ { "annotation_id": [ "1e3d5d6820e7f433376363f1e349bd66a4aa7b53" ], "answer": [ { "evidence": [ "Los resultados de la evaluación se presentan en la Tabla TABREF42, en la forma de promedios normalizados entre [0,1] y de su desviación estándar $\...
{ "caption": [ "Table 1: Corpus 5KL compuesto de 4 839 obras literarias.", "Table 2: Corpus 8KF compuesto de 7 679 frases literarias.", "Figure 1: Arquitectura general de los modelos.", "Figure 2: Modelo generativo estocástico (Markov) que produce una estructura gramatical vacía EGV.", "Figure 3: ...
[ "What evaluation metrics did they look at?" ]
[ [ "2001.11381-Experimentos y resultados ::: Resultados-5" ] ]
[ "accuracy with standard deviation" ]
308
1909.13362
Language-Agnostic Syllabification with Neural Sequence Labeling
The identification of syllables within phonetic sequences is known as syllabification. This task is thought to play an important role in natural language understanding, speech production, and the development of speech recognition systems. The concept of the syllable is cross-linguistic, though formal definitions are ra...
{ "paragraphs": [ [ "Words can be considered compositions of syllables, which in turn are compositions of phones. Phones are units of sound producible by the human vocal apparatus. Syllables play an important role in prosody and are influential components of natural language understanding, speech production...
{ "answers": [ { "annotation_id": [ "db21ebb540520b9df2e5841a9b8f9947372f7cff" ], "answer": [ { "evidence": [ "FLOAT SELECTED: TABLE I DATASETS AND LANGUAGES USED FOR EVALUATION. AVERAGE PHONE AND SYLLABLE COUNTS ARE PER WORD.", "To produce a lan...
{ "caption": [ "Fig. 1. Network diagram detailing the concatenation of the forward and backward LSTMs with the convolutional component.", "Fig. 2. Diagram of the LSTM cell. ci and hi are the cell states and hidden states that propagate through time, respectively. xi is the input at time i and is concatenated ...
[ "What are the datasets used for the task?", "What is the accuracy of the model for the six languages tested?", "Which models achieve state-of-the-art performances?" ]
[ [ "1909.13362-Materials ::: Datasets-1", "1909.13362-5-TableI-1.png", "1909.13362-Materials ::: Datasets-0" ], [ "1909.13362-6-TableIII-1.png", "1909.13362-Experiments ::: Results-0" ], [ "1909.13362-5-TableII-1.png", "1909.13362-Materials ::: Datasets-2" ] ]
[ "Datasets used are Celex (English, Dutch), Festival (Italian), OpenLexuque (French), IIT-Guwahati (Manipuri), E-Hitz (Basque)", "Authors report their best models have following accuracy: English CELEX (98.5%), Dutch CELEX (99.47%), Festival (99.990%), OpenLexique (100%), IIT-Guwahat (95.4%), E-Hitz (99.83%)", "...
309
1907.08937
Quantifying Similarity between Relations with Fact Distribution
We introduce a conceptually simple and effective method to quantify the similarity between relations in knowledge bases. Specifically, our approach is based on the divergence between the conditional probability distributions over entity pairs. In this paper, these distributions are parameterized by a very simple neural...
{ "paragraphs": [ [ "Author contributions: Hao Zhu designed the research; Weize Chen prepared the data, and organized data annotation; Hao Zhu and Xu Han designed the experiments; Weize Chen performed the experiments; Hao Zhu, Weize Chen and Xu Han wrote the paper; Zhiyuan Liu and Maosong Sun proofread the ...
{ "answers": [ { "annotation_id": [ "95bdaf3d6a7bda0b316622f894922a9e97014da0" ], "answer": [ { "evidence": [ "In this section, we consider two kinds of relational classification tasks: (1) relation prediction and (2) relation extraction. Relation prediction...
{ "caption": [ "Table 1: An illustration of the errors made by relation extraction models. The sentence contains obvious patterns indicating the two persons are siblings, but the model predicts it as parents. We introduce an approach to measure the similarity between relations. Our result shows “siblings” is the ...
[ "Which competitive relational classification models do they test?", "How do they gather human judgements for similarity between relations?" ]
[ [ "1907.08937-Error Analysis for Relational Classification-0", "1907.08937-Relation Extraction-0" ], [ "1907.08937-Human Judgments-0" ] ]
[ "For relation prediction they test TransE and for relation extraction they test position aware neural sequence model", "By assessing similarity of 360 pairs of relations from a subset of Wikidata using an integer similarity score from 0 to 4" ]
314
1803.02839
The emergent algebraic structure of RNNs and embeddings in NLP
We examine the algebraic and geometric properties of a uni-directional GRU and word embeddings trained end-to-end on a text classification task. A hyperparameter search over word embedding dimension, GRU hidden dimension, and a linear combination of the GRU outputs is performed. We conclude that words naturally embed t...
{ "paragraphs": [ [ "Tremendous advances in natural language processing (NLP) have been enabled by novel deep neural network architectures and word embeddings. Historically, convolutional neural network (CNN) BIBREF0 , BIBREF1 and recurrent neural network (RNN) BIBREF2 , BIBREF3 topologies have competed to ...
{ "answers": [ { "annotation_id": [ "bf774ae56bda98520bf2f391d2cdac452d7de496" ], "answer": [ { "evidence": [ "We trained word embeddings and a uni-directional GRU connected to a dense layer end-to-end for text classification on a set of scraped tweets using...
{ "caption": [ "Figure 1: The simple network trained as a classifier: GRU→Dense→Linear→Softmax. There are 10 nonlinear neurons dedicated to each of the final 10 energies that are combined through a linear layer before softmaxing. This is to capitalize on the universal approximation theorem’s implication that neur...
[ "What text classification task is considered?" ]
[ [ "1803.02839-Data and methods-0" ] ]
[ "To classify a text as belonging to one of the ten possible classes." ]
315
2001.07263
Single headed attention based sequence-to-sequence model for state-of-the-art results on Switchboard-300
It is generally believed that direct sequence-to-sequence (seq2seq) speech recognition models are competitive with hybrid models only when a large amount of data, at least a thousand hours, is available for training. In this paper, we show that state-of-the-art recognition performance can be achieved on the Switchboard...
{ "paragraphs": [ [ "Powerful neural networks have enabled the use of “end-to-end” speech recognition models that directly map a sequence of acoustic features to a sequence of words without conditional independence assumptions. Typical examples are attention based encoder-decoder BIBREF0 and recurrent neura...
{ "answers": [ { "annotation_id": [ "1fea4fe9abca58bfbaf8fa9d73e27e286350f040" ], "answer": [ { "evidence": [ "This study focuses on Switchboard-300, a standard 300-hour English conversational speech recognition task. Our acoustic and text data preparation f...
{ "caption": [ "Figure 1: (a) Building block of the encoder; (b) attention based decoder network used in the experiments.", "Table 1: Effect of data preparation steps on WER [%] measured on Hub5’00. The second row corresponds to the Kaldi s5c recipe.", "Table 2: Ablation study on the final training recipe...
[ "How much bigger is Switchboard-2000 than Switchboard-300 database?" ]
[ [ "2001.07263-Experimental setup-0", "2001.07263-Experimental results ::: Experiments on Switchboard-2000-0" ] ]
[ "Switchboard-2000 contains 1700 more hours of speech data." ]
316
1905.11037
Harry Potter and the Action Prediction Challenge from Natural Language
We explore the challenge of action prediction from textual descriptions of scenes, a testbed to approximate whether text inference can be used to predict upcoming actions. As a case of study, we consider the world of the Harry Potter fantasy novels and inferring what spell will be cast next given a fragment of a story....
{ "paragraphs": [ [ "Natural language processing (nlp) has achieved significant advances in reading comprehension tasks BIBREF0 , BIBREF1 . These are partially due to embedding methods BIBREF2 , BIBREF3 and neural networks BIBREF4 , BIBREF5 , BIBREF6 , but also to the availability of new resources and chall...
{ "answers": [ { "annotation_id": [ "622d720ec8d7a8da26e700b9bb84a0cbe2c97629" ], "answer": [ { "evidence": [ "Although not done in this work, an alternative (but also natural) way to address the task is as a special case of language modelling, where the out...
{ "caption": [ "Table 2: Corpus statistics: s is the length of the snippet.", "Table 4: Averaged recall at k over 5 runs.", "Table 3: Macro and weighted F-scores over 5 runs.", "Table 5: Performance on frequent (those that occur above the average) and infrequent actions.", "Table 6: Label distribu...
[ "Why do they think this task is hard? What is the baseline performance?" ]
[ [ "1905.11037-Models-1" ] ]
[ "1. there may be situations where more than one action is reasonable, and also because writers tell a story playing with elements such as surprise or uncertainty.\n2. Macro F1 = 14.6 (MLR, length 96 snippet)\nWeighted F1 = 31.1 (LSTM, length 128 snippet)" ]
319
1710.10609
Finding Dominant User Utterances And System Responses in Conversations
There are several dialog frameworks which allow manual specification of intents and rule based dialog flow. The rule based framework provides good control to dialog designers at the expense of being more time consuming and laborious. The job of a dialog designer can be reduced if we could identify pairs of user intents...
{ "paragraphs": [ [ "There are several existing works that focus on modelling conversation using prior human to human conversational data BIBREF0 , BIBREF1 , BIBREF2 . BIBREF3 models the conversation from pairs of consecutive tweets. Deep learning based approaches have also been used to model the dialog in ...
{ "answers": [ { "annotation_id": [ "b5256d39f74e32a23745916ba19ada536993c6ea" ], "answer": [ { "evidence": [ "In order to identify frequent user intents, one can use existing clustering algorithms to group together all the utterances from the users. Here ea...
{ "caption": [ "Figure 1: Some intents and dialog flow", "Figure 2: Some sample conversations and the obtained clusters", "Figure 3: Sample clusters with matching", "Table 1: Performance of SimCluster versus K-means clustering on synthetic dataset", "Figure 4: Improvement in ARI figures achieved b...
[ "How do they generate the synthetic dataset?" ]
[ [ "1710.10609-Experiments on Synthetic Dataset-2", "1710.10609-Experiments on Synthetic Dataset-0" ] ]
[ "Using a generative process." ]
320
1906.03538
Seeing Things from a Different Angle: Discovering Diverse Perspectives about Claims
One key consequence of the information revolution is a significant increase and a contamination of our information supply. The practice of fact checking won't suffice to eliminate the biases in text data we observe, as the degree of factuality alone does not determine whether biases exist in the spectrum of opinions vi...
{ "paragraphs": [ [ "Understanding most nontrivial claims requires insights from various perspectives. Today, we make use of search engines or recommendation systems to retrieve information relevant to a claim, but this process carries multiple forms of bias. In particular, they are optimized relative to th...
{ "answers": [ { "annotation_id": [ "ab33b2755926e5837deb5730e0f745a8112ccebc" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 2: A summary of PERSPECTRUM statistics", "We now provide a brief summary of [wave]390P[wave]415e[wave]440r[wave]4...
{ "caption": [ "Figure 1: Given a claim, a hypothetical system is expected to discover various perspectives that are substantiated with evidence and their stance with respect to the claim.", "Figure 2: Depiction of a few claims, their perspectives and evidences from PERSPECTRUM. The supporting", "Table 1:...
[ "What is the average length of the claims?", "What debate topics are included in the dataset?" ]
[ [ "1906.03538-5-Table2-1.png", "1906.03538-Statistics on the dataset-0" ], [ "1906.03538-Statistics on the dataset-1", "1906.03538-6-Figure3-1.png" ] ]
[ "Average claim length is 8.9 tokens.", "Ethics, Gender, Human rights, Sports, Freedom of Speech, Society, Religion, Philosophy, Health, Culture, World, Politics, Environment, Education, Digital Freedom, Economy, Science and Law" ]
322
1803.09230
Pay More Attention - Neural Architectures for Question-Answering
Machine comprehension is a representative task of natural language understanding. Typically, we are given context paragraph and the objective is to answer a question that depends on the context. Such a problem requires to model the complex interactions between the context paragraph and the question. Lately, attention m...
{ "paragraphs": [ [ "Enabling machines to understand natural language is one of the key challenges to achieve artificially intelligent systems. Asking machines questions and getting a meaningful answer adds value to us since it automatizes knowledge acquisition efforts drastically. Apple's Siri and Amazon's...
{ "answers": [ { "annotation_id": [ "21caa6c0b7e0fb8d6dcb8cf48fc6829fbf2201e7" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 1: Effect of Character Embedding" ], "extractive_spans": [], "free_form_answer": "In terms of F...
{ "caption": [ "Figure 1: Double Cross Attention Model", "Figure 2: Exploratory Data Analysis", "Table 1: Effect of Character Embedding", "Figure 3: Tensorboard Visualizations", "Table 4: Hyperparameter Tuning for DCA Model" ], "file": [ "4-Figure1-1.png", "5-Figure2-1.png", "5-Tab...
[ "By how much, the proposed method improves BiDAF and DCN on SQuAD dataset?" ]
[ [ "1803.09230-5-Table1-1.png" ] ]
[ "In terms of F1 score, the Hybrid approach improved by 23.47% and 1.39% on BiDAF and DCN respectively. The DCA approach improved by 23.2% and 1.12% on BiDAF and DCN respectively." ]
323
1709.05404
Creating and Characterizing a Diverse Corpus of Sarcasm in Dialogue
The use of irony and sarcasm in social media allows us to study them at scale for the first time. However, their diversity has made it difficult to construct a high-quality corpus of sarcasm in dialogue. Here, we describe the process of creating a large- scale, highly-diverse corpus of online debate forums dialogue, an...
{ "paragraphs": [ [ "Irony and sarcasm in dialogue constitute a highly creative use of language signaled by a large range of situational, semantic, pragmatic and lexical cues. Previous work draws attention to the use of both hyperbole and rhetorical questions in conversation as distinct types of lexico-synt...
{ "answers": [ { "annotation_id": [ "21cace078e31fa2cc1349f5fd5edcd08a17822ef" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": ...
{ "caption": [ "Table 1: Examples of different types of SARCASTIC (S) and NOT-SARCASTIC (NS) Posts", "Table 2: Total number of posts in each subcorpus (each with a 50% split of SARCASTIC and NOTSARCASTIC posts)", "Figure 1: Mechanical Turk Task Layout", "Table 3: Annotation Counts for a Subset of Cues...
[ "What are the linguistic differences between each class?" ]
[ [ "1709.05404-Linguistic Analysis-5", "1709.05404-Linguistic Analysis-6", "1709.05404-Linguistic Analysis-2", "1709.05404-Linguistic Analysis-3", "1709.05404-Linguistic Analysis-1" ] ]
[ "The sarcastic and non-sarcastic classes show different patterns in their use of adjectives, adverbs, and verbs." ]
324
2003.05377
Brazilian Lyrics-Based Music Genre Classification Using a BLSTM Network
Organizing songs, albums, and artists into groups with shared similarity could be done with the help of genre labels. In this paper, we present a novel approach for automatically classifying musical genre in Brazilian music using only the song lyrics. This kind of classification remains a challenge in the field of Natural Lang...
{ "paragraphs": [ [ "Music is part of the day-to-day life of a huge number of people, and many works try to understand the best way to classify, recommend, and identify similarities between songs. Among the tasks that involve music classification, genre classification has been studied widely in recent years...
{ "answers": [ { "annotation_id": [ "fd4dc678ccb665a4b219e9090c219f7e563ebd51" ], "answer": [ { "evidence": [ "In order to obtain a large number of Brazilian music lyrics, we created a crawler to navigate into the Vagalume website, extracting, for each music...
{ "caption": [ "Figure 1: An example of a Vagalume’s song web page", "Table 1: The number of songs and artists by genre", "Figure 2: The Long Short-Term Memory unit.", "Figure 3: Our BLSTM model architecture", "Table 2: Classification results for each classifier and word embeddings model combinati...
[ "what genres do they songs fall under?" ]
[ [ "2003.05377-Methods ::: Data Acquisition-1", "2003.05377-3-Table1-1.png" ] ]
[ "Gospel, Sertanejo, MPB, Forró, Pagode, Rock, Samba, Pop, Axé, Funk-carioca, Infantil, Velha-guarda, Bossa-nova and Jovem-guarda" ]
325
2001.05467
AvgOut: A Simple Output-Probability Measure to Eliminate Dull Responses
Many sequence-to-sequence dialogue models tend to generate safe, uninformative responses. There have been various useful efforts on trying to eliminate them. However, these approaches either improve decoding algorithms during inference, rely on hand-crafted features, or employ complex models. In our work, we build dial...
{ "paragraphs": [ [ "Many modern dialogue generation models use a sequence-to-sequence architecture as their backbone BIBREF0, following its success when applied to Machine Translation (MT) BIBREF1. However, dialogue tasks also have a requirement different from that of MT: the response not only has to be \"...
{ "answers": [ { "annotation_id": [ "58eb36e018db5f54effe1a5c0708afa5e6517db0" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 1: Automatic Evaluation Activity/Entity F1 results for baselines and our 3 models (attn means “with attention”). LSTM, HRED a...
{ "caption": [ "Figure 1: MinAvgOut model: use the dot product of average output distribution of the exponential average and the current batch to evaluate how diverse the current batch is.", "Figure 2: An example of AVGOUT applied to a single token, which readily generalizes to multiple tokens within a respon...
[ "To what other competitive baselines is this approach compared?", "How is human evaluation performed, what was the criteria?", "How much better were results of the proposed models than base LSTM-RNN model?", "Which one of the four proposed models performed best?" ]
[ [ "2001.05467-5-Table1-1.png" ], [ "2001.05467-Experimental Setup ::: Human Evaluation-1" ], [ "2001.05467-5-Table1-1.png" ], [ "2001.05467-Results and Analysis ::: Automatic Evaluation Results-0" ] ]
[ "LSTMs with and without attention, HRED, VHRED with and without attention, MMI and Reranking-RL", "Through Amazon MTurk annotators to determine plausibility and content richness of the response", "on diversity 6.87 and on relevance 4.6 points higher", "the hybrid model MinAvgOut + RL" ]
327
1909.09484
Generative Dialog Policy for Task-oriented Dialog Systems
There is an increasing demand for task-oriented dialogue systems which can assist users in various activities such as booking tickets and restaurant reservations. In order to complete dialogues effectively, dialogue policy plays a key role in task-oriented dialogue systems. As far as we know, the existing task-oriented...
{ "paragraphs": [ [ "Task-oriented dialogue system is an important tool to build personal virtual assistants, which can help users to complete most of the daily tasks by interacting with devices via natural language. It's attracting increasing attention of researchers, and lots of works have been proposed i...
{ "answers": [ { "annotation_id": [ "22373b0432d3562414ad265ca283a3aa073e45c1" ], "answer": [ { "evidence": [ "BPRA: Belief Per-Response Accuracy (BPRA) tests the ability to generate the correct user intents during the dialogue. This metric is used to evalua...
{ "caption": [ "Figure 1: The examples in DSTC2 dataset, our proposed model can hold more information about dialogue policy than the classification models mentioned above. “MA, w/o P” is the model that chooses multiple acts without corresponding parameters during dialogue police modeling, “w/o MA, P” is the model...
[ "How much is proposed model better than baselines in performed experiments?" ]
[ [ "1909.09484-Experiments ::: Experimental Results-3", "1909.09484-Experiments ::: Experimental Results-1", "1909.09484-7-Table2-1.png", "1909.09484-Experiments ::: Experimental Results-2" ] ]
[ "most of the models have similar performance on BPRA: DSTC2 (+0.0015), Maluuba (+0.0729)\nGDP achieves the best performance in APRA: DSTC2 (+0.2893), Maluuba (+0.2896)\nGDP significantly outperforms the baselines on BLEU: DSTC2 (+0.0791), Maluuba (+0.0492)" ]
328
1909.02776
Features in Extractive Supervised Single-document Summarization: Case of Persian News
Text summarization has been one of the most challenging areas of research in NLP. Much effort has been made to overcome this challenge by using either the abstractive or extractive methods. Extractive methods are more popular, due to their simplicity compared with the more elaborate abstractive methods. In extractive a...
{ "paragraphs": [ [ "From the early days of artificial intelligence, automatically summarizing a text was an interesting task for many researchers. Followed by the advance of the World Wide Web and the advent of concepts such as Social networks, Big Data, and Cloud computing among others, text summarization...
{ "answers": [ { "annotation_id": [ "b474cdce67d7756ba614277e31d748158d546c14" ], "answer": [ { "evidence": [ "We used the Pasokh dataset BIBREF42 that contains 100 Persian news documents each of which is associated with 5 summaries. Each summary consists of...
{ "caption": [ "Figure 1: An excerpt of whole feature set. SC and SP under Topical category stand for Science and Sport, respectively.", "Table 1: Quality of the regression model’s predictions on the test set.", "Figure 2: ROUGE Quality of produced summaries in terms of f-measure.", "Figure 3: ROUGE Q...
[ "By how much is precission increased?" ]
[ [ "1909.02776-9-Figure3-1.png" ] ]
[ "ROUGE-1 increases by 0.05, ROUGE-2 by 0.06 and ROUGE-L by 0.09" ]
332
1911.00133
Dreaddit: A Reddit Dataset for Stress Analysis in Social Media
Stress is a nigh-universal human experience, particularly in the online world. While stress can be a motivator, too much stress is associated with many negative health outcomes, making its identification useful across a range of domains. However, existing computational research typically only studies stress in domains ...
{ "paragraphs": [ [ "In our online world, social media users tweet, post, and message an incredible number of times each day, and the interconnected, information-heavy nature of our lives makes stress more prominent and easily observable than ever before. With many platforms such as Twitter, Reddit, and Fac...
{ "answers": [ { "annotation_id": [ "fabde6151d3a3807a6927286d467f749e8e11c41" ], "answer": [ { "evidence": [ "We submit 4,000 segments, sampled equally from each domain and uniformly within domains, to Mechanical Turk to be annotated by at least five Worker...
{ "caption": [ "Figure 1: An example of stress being expressed in social media from our dataset, from a post in r/anxiety (reproduced exactly as found). Some possible expressions of stress are highlighted.", "Table 1: Data Statistics. We include ten total subreddits from five domains in our dataset. Because s...
[ "What labels are in the dataset?" ]
[ [ "1911.00133-2-Figure1-1.png" ] ]
[ "A binary label of stress or not stress." ]
333
1709.05413
"How May I Help You?": Modeling Twitter Customer Service Conversations Using Fine-Grained Dialogue Acts
Given the increasing popularity of customer service dialogue on Twitter, analysis of conversation data is essential to understand trends in customer and agent behavior for the purpose of automating customer service interactions. In this work, we develop a novel taxonomy of fine-grained"dialogue acts"frequently observed...
{ "paragraphs": [ [ "The need for real-time, efficient, and reliable customer service has grown in recent years. Twitter has emerged as a popular medium for customer service dialogue, allowing customers to make inquiries and receive instant live support in the public domain. In order to provide useful infor...
{ "answers": [ { "annotation_id": [ "e02902f52907ab54c212407401fe155cd9708319" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 1: Example Twitter Customer Service Conversation" ], "extractive_spans": [], "free_form_answer"...
{ "caption": [ "Table 1: Example Twitter Customer Service Conversation", "Figure 1: Methodology Pipeline", "Figure 2: Proposed Fine-Grained Dialogue Act Taxonomy for Customer Service", "Table 3: Dialogue Act Agreement in Fleiss-κ Bins (from Landis and Koch, 1977)", "Figure 3: Distribution of Annot...
[ "How are customer satisfaction, customer frustration and overall problem resolution data collected?" ]
[ [ "1709.05413-Data Collection-2" ] ]
[ "By annotators on Amazon Mechanical Turk." ]
335
1704.00253
Building a Neural Machine Translation System Using Only Synthetic Parallel Data
Recent works have shown that synthetic parallel data automatically generated by translation models can be effective for various neural machine translation (NMT) issues. In this study, we build NMT systems using only synthetic parallel data. As an efficient alternative to real parallel data, we also present a new type o...
{ "paragraphs": [ [ "Given the data-driven nature of neural machine translation (NMT), the limited source-to-target bilingual sentence pairs have been one of the major obstacles in building competitive NMT systems. Recently, pseudo parallel data, which refer to the synthetic bilingual sentence pairs automat...
{ "answers": [ { "annotation_id": [ "c77fc6eed80a8a59203bd22cb5a34730869fc963" ], "answer": [ { "evidence": [ "While the mixing strategy compensates for most of the gap between the Fr-De* and the Fr*-De (3.01 $\\rightarrow $ 0.17) in the De $\\rightarrow $ F...
{ "caption": [ "Figure 1: The process of building each pseudo parallel corpus group for Czech→ German translation. * indicates the synthetic sentences generated by translation models. PSEUDOsrc and PSEUDOtgt can be made from Czech or German monolingual corpora or from parallel corpora including English, which is ...
[ "How many improvements on the French-German translation benchmark?" ]
[ [ "1704.00253-Results and Analysis-6", "1704.00253-Results and Analysis-7" ] ]
[ "one" ]
336
1909.11833
SIM: A Slot-Independent Neural Model for Dialogue State Tracking
Dialogue state tracking is an important component in task-oriented dialogue systems to identify users' goals and requests as a dialogue proceeds. However, as most previous models are dependent on dialogue slots, the model complexity soars when the number of slots increases. In this paper, we put forward a slot-independ...
{ "paragraphs": [ [ "With the rapid development in deep learning, there is a recent boom of task-oriented dialogue systems in terms of both algorithms and datasets. The goal of task-oriented dialogue is to fulfill a user's requests such as booking hotels via communication in natural language. Due to the com...
{ "answers": [ { "annotation_id": [ "e4b2da7061b31ea9244a9a97fef20a4092b208a8" ], "answer": [ { "evidence": [ "To solve this problem, we need a state tracking model independent of dialogue slots. In other words, the network should depend on the semantic simi...
{ "caption": [ "Figure 1: SIM model structure.", "Table 1: Joint goal and turn request accuracies on WoZ and DSTC2 restaurant reservation datasets.", "Table 2: Model size comparison between SIM and GLAD (Zhong et al., 2018) on WoZ and DSTC2.", "Table 3: Ablation study of SIM on WoZ. We pick the model ...
[ "How do they prevent the model complexity increasing with the increased number of slots?", "How do they measure model size?" ]
[ [ "1909.11833-Introduction-2" ], [ "1909.11833-Experiment ::: Baseline models and result-2" ] ]
[ "They exclude slot-specific parameters and incorporate better feature representation of user utterance and dialogue states using syntactic information and convolutional neural networks (CNN).", "By the number of parameters." ]
338
1804.00079
Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning
A lot of the recent success in natural language processing (NLP) has been driven by distributed vector representations of words trained on large amounts of text in an unsupervised manner. These representations are typically used as general purpose features for words across a range of NLP problems. However, extending th...
{ "paragraphs": [ [ "Transfer learning has driven a number of recent successes in computer vision and NLP. Computer vision tasks like image captioning BIBREF0 and visual question answering typically use CNNs pretrained on ImageNet BIBREF1 , BIBREF2 to extract representations of the image, while several natu...
{ "answers": [ { "annotation_id": [ "99b17a6492c1b08da462afad4d9bf5de5a3e224d" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": ...
{ "caption": [ "Table 1: An approximate number of sentence pairs for each task.", "Figure 1: T-SNE visualizations of our sentence representations on 3 different datasets. SUBJ (left), TREC (middle), DBpedia (right). Dataset details are presented in the Appendix.", "Table 2: Evaluation of sentence represen...
[ "Which model architecture do they for sentence encoding?", "Which data sources do they use?" ]
[ [ "1804.00079-Training Objectives & Evaluation-1" ], [ "1804.00079-5-Table1-1.png" ] ]
[ "Answer with content missing: (Skip-thought vectors-Natural Language Inference paragraphs) The encoder for the current sentence and the decoders for the previous (STP) and next sentence (STN) are typically parameterized as separate RNNs\n- RNN", "- En-Fr (WMT14)\n- En-De (WMT15)\n- Skipthought (BookCorpus)\n- All...
340
1805.09959
A Sentiment Analysis of Breast Cancer Treatment Experiences and Healthcare Perceptions Across Twitter
Background: Social media has the capacity to afford the healthcare industry with valuable feedback from patients who reveal and express their medical decision-making process, as well as self-reported quality of life indicators both during and post treatment. In prior work, [Crannell et. al.], we have studied an active ...
{ "paragraphs": [ [ "Twitter has shown potential for monitoring public health trends, BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , disease surveillance, BIBREF6 , and providing a rich online forum for cancer patients, BIBREF7 . Social media has been validated as an effective educational and support too...
{ "answers": [ { "annotation_id": [ "b19dc7607ae7c9e5e8c50a4e0a8e7428dbc96511" ], "answer": [ { "evidence": [ "Our study focused primarily on English tweets, since this was the language of our diagnostic training sample. Future studies could incorporate othe...
{ "caption": [ "FIG. 1: (left) The distribution of tweets per given user is plotted on a log axis. The tail tends to be high frequency automated accounts, some of which provide daily updates or news related to cancer. (right) A frequency time-series of the tweets collected, binned by day.", "TABLE I: Diagnost...
[ "What machine learning and NLP methods were used to sift tweets relevant to breast cancer experiences?" ]
[ [ "1805.09959-Data Description-6", "1805.09959-Data Description-3", "1805.09959-Data Description-2" ] ]
[ "ML logistic regression classifier combined with a Convolutional Neural Network (CNN) to identify self-reported diagnostic tweets.\nNLP methods: tweet conversion to numeric word vector, removing tweets containing hyperlinks, removing \"retweets\", removing all tweets containing horoscope indicators, lowercasing...
342
2003.12738
Variational Transformers for Diverse Response Generation
Despite the great promise of Transformers in many sequence modeling tasks (e.g., machine translation), their deterministic nature hinders them from generalizing to high entropy tasks such as dialogue response generation. Previous work proposes to capture the variability of dialogue responses with a recurrent neural net...
{ "paragraphs": [ [ "Convolutional and fully-attentional feed-forward architectures, such as Transformers BIBREF0, have emerged as effective alternatives to RNNs BIBREF1 in wide range of NLP tasks. These architectures remove the computational temporal dependency during the training and effectively address t...
{ "answers": [ { "annotation_id": [ "a88de75bc60cab3829d5fcfd51b41290f8c93e87" ], "answer": [ { "evidence": [ "Compare to baseline models, the GVT achieves relatively lower reconstruction PPL, which suggests that the global latent variable contains rich late...
{ "caption": [ "Figure 1: The Global Variational Transformer. During training, The posterior latent variable z by the posterior network is passed to the decoder, while during testing, the target response is absent, and z is replaced by the prior latent variable. The word embeddings, positional encoding, softmax l...
[ "What approach performs better in experiments global latent or sequence of fine-grained latent variables?" ]
[ [ "2003.12738-7-Table1-1.png", "2003.12738-Results ::: Quantitative Analysis-1" ] ]
[ "PPL: SVT\nDiversity: GVT\nEmbeddings Similarity: SVT\nHuman Evaluation: SVT" ]
345
1906.01183
Back Attention Knowledge Transfer for Low-resource Named Entity Recognition
In recent years, great success has been achieved in the field of natural language processing (NLP), thanks in part to the considerable amount of annotated resources. For named entity recognition (NER), most languages do not have such an abundance of labeled data, so the performances of those languages are comparatively...
{ "paragraphs": [ [ "Named entity recognition (NER) is a sequence tagging task that extracts the continuous tokens into specified classes, such as person names, organizations and locations. Current state-of-the-art approaches for NER usually base themselves on long short-term memory recurrent neural network...
{ "answers": [ { "annotation_id": [ "7ee6a452970790d614ad6eed09a3d005ad3aca0f" ], "answer": [ { "evidence": [ "Attention-base translation model We use the system of BIBREF6 , a convolutional sequence to sequence model. It divides translation process into two...
{ "caption": [ "Figure 1: The architecture of BAN. The source sentences are translated into English and recorded the attention weights. Then the sentences are put into English NER model. After acquiring the outputs of BiLSTM in the English model, we use back attention mechanism to obtain transfer knowledge to aid...
[ "Which pre-trained English NER model do they use?" ]
[ [ "1906.01183-Experimental Setup-0", "1906.01183-Pre-trained Translation and NER Model-3" ] ]
[ "Bidirectional LSTM based NER model of Flair" ]
349
1909.06522
Multilingual Graphemic Hybrid ASR with Massive Data Augmentation
Towards developing high-performing ASR for low-resource languages, approaches to address the lack of resources are to make use of data from multiple languages, and to augment the training data by creating acoustic variations. In this work we present a single grapheme-based ASR model learned on 7 geographically proximal...
{ "paragraphs": [ [ "It can be challenging to build high-accuracy automatic speech recognition (ASR) systems in real world due to the vast language diversity and the requirement of extensive manual annotations on which the ASR algorithms are typically built. Series of research efforts have thus far been foc...
{ "answers": [ { "annotation_id": [ "a68e54ccc7c2f2543dcb11b0c1cbc4aef3b976ba" ], "answer": [ { "evidence": [], "extractive_spans": [], "free_form_answer": "", "highlighted_evidence": [], "unanswerable": true, "yes_no": ...
{ "caption": [ "Table 1. The amounts of audio data in hours.", "Table 2. WER results on each video dataset. Frequency masking is denoted by fm, speed perturbation by sp, and additive noise (Section 3.2) by noise. 3lang, 4lang and 7lang denote the multilingual ASR models trained on 3, 4 and 7 languages, respec...
[ "How much of the ASR grapheme set is shared between languages?" ]
[ [ "1909.06522-Experiments ::: Data-6" ] ]
[ "Little overlap except common basic Latin alphabet and that Hindi and Marathi languages use same script." ]
350
1909.12642
HateMonitors: Language Agnostic Abuse Detection in Social Media
Reducing hateful and offensive content in online social media pose a dual problem for the moderators. On the one hand, rigid censorship on social media cannot be imposed. On the other, the free flow of such content cannot be allowed. Hence, we require efficient abusive language detection system to detect such harmful c...
{ "paragraphs": [ [ "In social media, abusive language denotes a text which contains any form of unacceptable language in a post or a comment. Abusive language can be divided into hate speech, offensive language and profanity. Hate speech is a derogatory comment that hurts an entire group in terms of ethnic...
{ "answers": [ { "annotation_id": [ "250d0c947b58d1efcf26cec7982d76f573520cc2" ], "answer": [ { "evidence": [ "The performance of our models across different languages for sub-task A are shown in table TABREF19. Our model got the first position in the German...
{ "caption": [ "Table 1. This table shows the initial statistics about the training and test data", "Fig. 1. Architecture of our system", "Table 2. This table gives the language wise result of sub-task A by comparing the macro F1 values", "Table 3. This table gives the language wise result of sub-task...
[ "What are the languages used to test the model?" ]
[ [ "1909.12642-Discussion-0", "1909.12642-Dataset and Task description-0" ] ]
[ "Hindi, English and German (German task won)" ]
351
1902.10525
Fast Multi-language LSTM-based Online Handwriting Recognition
We describe an online handwriting system that is able to support 102 languages using a deep neural network architecture. This new system has completely replaced our previous Segment-and-Decode-based system and reduced the error rate by 20%-40% relative for most languages. Further, we report new state-of-the-art results...
{ "paragraphs": [ [ "In this paper we discuss online handwriting recognition: Given a user input in the form of an ink, i.e. a list of touch or pen strokes, output the textual interpretation of this input. A stroke is a sequence of points INLINEFORM0 with position INLINEFORM1 and timestamp INLINEFORM2 .", ...
{ "answers": [ { "annotation_id": [ "cd6aea1f1b35ea50d820594d98f12de4a601e545" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 9 Character error rates on the validation data using successively more of the system components described above for English (...
{ "caption": [ "Fig. 1 Example inputs for online handwriting recognition in different languages. See text for details.", "Table 1 List of languages supported in our system grouped by script.", "Fig. 2 An overview our recognition models. In our architecture the input representation is passed through one or...
[ "Which language has the lowest error rate reduction?" ]
[ [ "1902.10525-11-Table9-1.png" ] ]
[ "Thai" ]
352
1912.05238
BERT has a Moral Compass: Improvements of ethical and moral values of machines
Allowing machines to choose whether to kill humans would be devastating for world peace and security. But how do we equip machines with the ability to learn ethical or even moral choices? Jentzsch et al.(2019) showed that applying machine learning to human texts can extract deontological ethical reasoning about "right"...
{ "paragraphs": [ [ "There is a broad consensus that artificial intelligence (AI) research is progressing steadily, and that its impact on society is likely to increase. From self-driving cars on public streets to self-piloting, reusable rockets, AI systems tackle more and more complex human activities in a...
{ "answers": [ { "annotation_id": [ "ed6630cf594af9ecb6ba6ba4e77e543ec347a640" ], "answer": [ { "evidence": [ "Dos and Don'ts for the Moral Choice Machine. The verb extraction identifies the most positive and most negative associated verbs in vocabulary, to ...
{ "caption": [ "Figure 1: BERT has a moral dimension: PCA of its embeddings projected to 2D. The top PC is the x axis, its moral dimension m.", "Figure 2: Correlation of moral bias score and WEAT Value for general Dos and Don’ts. (Blue line) Correlation, the Pearson’s Correlation Coefficient using USE as embe...
[ "How is moral bias measured?" ]
[ [ "1912.05238-Human-like Moral Choices from Human Text-1", "1912.05238-Human-like Moral Choices from Human Text-2" ] ]
[ "Answer with content missing: (formula 1) bias(q, a, b) = cos(a, q) − cos(b, q)\nBias is calculated as substraction of cosine similarities of question and some answer for two opposite answers." ]
353
2002.11268
A Density Ratio Approach to Language Model Fusion in End-to-End Automatic Speech Recognition
This article describes a density ratio approach to integrating external Language Models (LMs) into end-to-end models for Automatic Speech Recognition (ASR). Applied to a Recurrent Neural Network Transducer (RNN-T) ASR model trained on a given domain, a matched in-domain RNN-LM, and a target domain RNN-LM, the proposed ...
{ "paragraphs": [ [ "End-to-end models such as Listen, Attend & Spell (LAS) BIBREF0 or the Recurrent Neural Network Transducer (RNN-T) BIBREF1 are sequence models that directly define $P(W | X)$, the posterior probability of the word or subword sequence $W$ given an audio frame sequence $X$, with no chainin...
{ "answers": [ { "annotation_id": [ "2c01c2f087320332635fe50162452442ce58ef42" ], "answer": [ { "evidence": [ "The Density Ratio method consistently outperformed Shallow Fusion for the cross-domain scenarios examined, with and without fine-tuning to audio da...
{ "caption": [ "Fig. 1. Estimating a target domain pseudo-posterior via combination of source domain RNN-T, source domain RNN-LM, and target domain RNN-LM.", "Fig. 2. Dev set WERs for Shallow Fusion LM scaling factor λ vs. sequence length scaling factor β.", "Table 1. Training set size and test set perple...
[ "What metrics are used for evaluation?", "How much training data is used?" ]
[ [ "2002.11268-Discussion-1" ], [ "2002.11268-Training, development and evaluation data ::: Training data-2", "2002.11268-Training, development and evaluation data ::: Training data-4", "2002.11268-Training, development and evaluation data ::: Training data-1", "2002.11268-Training, developme...
[ "word error rate", "163,110,000 utterances" ]
356
1905.13497
Attention Is (not) All You Need for Commonsense Reasoning
The recently introduced BERT model exhibits strong performance on several language understanding benchmarks. In this paper, we describe a simple re-implementation of BERT for commonsense reasoning. We show that the attentions produced by BERT can be directly utilized for tasks such as the Pronoun Disambiguation Problem...
{ "paragraphs": [ [ "Recently, neural models pre-trained on a language modeling task, such as ELMo BIBREF0 , OpenAI GPT BIBREF1 , and BERT BIBREF2 , have achieved impressive results on various natural language processing tasks such as question-answering and natural language inference. The success of BERT ca...
{ "answers": [ { "annotation_id": [ "25f1b9e8397c9e2edbfad470ba6231c717c2ca45" ], "answer": [ { "evidence": [ "Recently, neural models pre-trained on a language modeling task, such as ELMo BIBREF0 , OpenAI GPT BIBREF1 , and BERT BIBREF2 , have achieved impre...
{ "caption": [ "Figure 1: Maximum Attention Score (MAS) for a particular sentence, where colors show attention maps for different words (best shown in color). Squares with blue/red frames correspond to specific sliced attentions Ac for candidates c, establishing the relationship to the reference pronoun indicated...
[ "How does their model differ from BERT?" ]
[ [ "1905.13497-BERT Model Details-0" ] ]
[ "Their model does not differ from BERT." ]
358
1909.13668
On the Importance of the Kullback-Leibler Divergence Term in Variational Autoencoders for Text Generation
Variational Autoencoders (VAEs) are known to suffer from learning uninformative latent representation of the input due to issues such as approximated posterior collapse, or entanglement of the latent space. We impose an explicit constraint on the Kullback-Leibler (KL) divergence term inside the VAE objective function. ...
{ "paragraphs": [ [ "Despite the recent success of deep generative models such as Variational Autoencoders (VAEs) BIBREF0 and Generative Adversarial Networks (GANs) BIBREF1 in different areas of Machine Learning, they have failed to produce similar generative quality in NLP. In this paper we focus on VAEs a...
{ "answers": [ { "annotation_id": [ "9a030eb6914f2b1e7de542c2c50aa6cf8907545c" ], "answer": [ { "evidence": [ "We conduct various experiments to illustrate the properties that are encouraged via different KL magnitudes. In particular, we revisit the interdep...
{ "caption": [ "Figure 1: Rate-Distortion and LogDetCov for C = {10, 20, ..., 100} on Yahoo and Yelp corpora.", "Table 1: βC-VAELSTM performance with C = {3, 15, 100} on the test sets of CBT, WIKI, and WebText. Each bucket groups sentences of certain length. Bucket 1: length ≤ 10; Bucket 2: 10 < length ≤ 20; ...
[ "How does explicit constraint on the KL divergence term that authors propose looks like?" ]
[ [ "1909.13668-Kullback-Leibler Divergence in VAE ::: Explicit KL Control via @!START@$\\beta $@!END@-VAE-1", "1909.13668-Kullback-Leibler Divergence in VAE ::: Explicit KL Control via @!START@$\\beta $@!END@-VAE-0" ] ]
[ "Answer with content missing: (Formula 2) Formula 2 is an answer: \n\\big \\langle\\! \\log p_\\theta({x}|{z}) \\big \\rangle_{q_\\phi({z}|{x})} - \\beta |D_{KL}\\big(q_\\phi({z}|{x}) || p({z})\\big)-C|" ]
360
1802.05322
Classifying movie genres by analyzing text reviews
This paper proposes a method for classifying movie genres by only looking at text reviews. The data used are from Large Movie Review Dataset v1.0 and IMDb. This paper compared a K-nearest neighbors (KNN) model and a multilayer perceptron (MLP) that uses tf-idf as input features. The paper also discusses different evalu...
{ "paragraphs": [ [ "By only reading a single text review of a movie it can be difficult to say what the genre of that movie is, but by using text mining techniques on thousands of movie reviews is it possible to predict the genre?", "This paper explores the possibility of classifying genres of a movi...
{ "answers": [ { "annotation_id": [ "602b6f6182ba06c3ae6b17680b5b8b0f500196c9" ], "answer": [ { "evidence": [ "This paper experimented with two different models and compared them against each other. The inspiration for the first model comes from BIBREF1 in t...
{ "caption": [ "Table 1: List of basic terminology.", "Figure 1: Histogram showing the distribution of genres.", "Figure 2: Histogram showing the distribution of genres per review.", "Table 4: accuracy, precisionmicro and recallmicro for the models.", "Table 3: Values of non-default parameters for...
[ "what was the baseline?" ]
[ [ "1802.05322-Model-0" ] ]
[ "There is no baseline." ]
364
2004.01878
News-Driven Stock Prediction With Attention-Based Noisy Recurrent State Transition
We consider direct modeling of underlying stock value movement sequences over time in the news-driven stock movement prediction. A recurrent state transition model is constructed, which better captures a gradual process of stock movement continuously by modeling the correlation between past and future price movements. ...
{ "paragraphs": [ [ "Stock movement prediction is a central task in computational and quantitative finance. With recent advances in deep learning and natural language processing technology, event-driven stock prediction has received increasing research attention BIBREF0, BIBREF1. The goal is to predict the ...
{ "answers": [ { "annotation_id": [ "26a45d5e989bebbf93de405fb8fe347fca8ae71d" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 1: Statistics of the datasets." ], "extractive_spans": [], "free_form_answer": "553,451 documen...
{ "caption": [ "Figure 1: Example of news impacts on 3M Company. Over the first and the second periods (from Oct. 24 to Nov. 1, 2006 and from Sep. 21 to Oct. 1, 2007), there was only one event. In the third period (from Nov. 10 to Nov. 18, 2008), there were two events affecting the stock price movements simultane...
[ "How big is dataset used?" ]
[ [ "2004.01878-6-Table1-1.png" ] ]
[ "553,451 documents" ]
365
1905.07471
Learning Open Information Extraction of Implicit Relations from Reading Comprehension Datasets
The relationship between two entities in a sentence is often implied by word order and common sense, rather than an explicit predicate. For example, it is evident that"Fed chair Powell indicates rate hike"implies (Powell, is a, Fed chair) and (Powell, works for, Fed). These tuples are just as significant as the explici...
{ "paragraphs": [ [ "Open Information Extraction (OpenIE) is the NLP task of generating (subject, relation, object) tuples from unstructured text e.g. “Fed chair Powell indicates rate hike” outputs (Powell, indicates, rate hike). The modifier open is used to contrast IE research in which the relation belong...
{ "answers": [ { "annotation_id": [ "26df92ca3004b2f750fbff14cd0d2b5a611fdbee" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Figure 2: PR curve on our implicit tuples dataset." ], "extractive_spans": [], "free_form_answer": "T...
{ "caption": [ "Table 1: Dataset statistics.", "Figure 1: Tuple conversion and alignment process flow.", "Figure 2: PR curve on our implicit tuples dataset.", "Figure 3: PR curve on the explicit tuples dataset." ], "file": [ "3-Table1-1.png", "3-Figure1-1.png", "4-Figure2-1.png", "...
[ "How much better does this baseline neural model do?" ]
[ [ "1905.07471-4-Figure2-1.png" ] ]
[ "The model outperforms at every point in the\nimplicit-tuples PR curve reaching almost 0.8 in recall" ]
366
1603.00968
MGNC-CNN: A Simple Approach to Exploiting Multiple Word Embeddings for Sentence Classification
We introduce a novel, simple convolution neural network (CNN) architecture - multi-group norm constraint CNN (MGNC-CNN) that capitalizes on multiple sets of word embeddings for sentence classification. MGNC-CNN extracts features from input embedding sets independently and then joins these at the penultimate layer in th...
{ "paragraphs": [ [ "Neural models have recently gained popularity for Natural Language Processing (NLP) tasks BIBREF0 , BIBREF1 , BIBREF2 . For sentence classification, in particular, Convolution Neural Networks (CNN) have realized impressive performance BIBREF3 , BIBREF4 . These models operate over word e...
{ "answers": [ { "annotation_id": [ "271b515571b41377124029ee4375c5ac5f08a926" ], "answer": [ { "evidence": [ "Our approach enjoys the following advantages compared to the only existing comparable model BIBREF11 : (i) It can leverage diverse, readily availab...
{ "caption": [ "Figure 1: Illustration of MG-CNN and MGNC-CNN. The filters applied to the respective embeddings are completely independent. MG-CNN applies a max norm constraint to o, while MGNC-CNN applies max norm constraints on o1 and o2 independently (group regularization). Note that one may easily extend the ...
[ "What are the baseline models?", "By how much of MGNC-CNN out perform the baselines?" ]
[ [ "1603.00968-4-Table1-1.png", "1603.00968-Setup-0" ], [ "1603.00968-4-Table1-1.png", "1603.00968-Results and Discussion-1" ] ]
[ "MC-CNN\nMVCNN\nCNN", "In terms of Subj, the average MGNC-CNN is better than the average score of baselines by 0.5. Scores on SST-1, SST-2, and TREC show similar improvements for MGNC-CNN. \nIn the case of Irony, the difference is about 2.0. \n" ]
368
2004.01980
Hooks in the Headline: Learning to Generate Headlines with Controlled Styles
Current summarization systems only produce plain, factual headlines, but do not meet the practical needs of creating memorable titles to increase exposure. We propose a new task, Stylistic Headline Generation (SHG), to enrich the headlines with three style options (humor, romance and clickbait), in order to attract mor...
{ "paragraphs": [ [ "Every good article needs a good title, which should not only be able to condense the core meaning of the text, but also sound appealing to the readers for more exposure and memorableness. However, currently even the best Headline Generation (HG) system can only fulfill the above require...
{ "answers": [ { "annotation_id": [ "4b49212f42e25384c3a9f1535f1d64f8056fc0dc" ], "answer": [ { "evidence": [ "FLOAT SELECTED: Table 2: Human evaluation on three aspects: relevance, attraction, and fluency. “None” represents the original headlines in the dat...
{ "caption": [ "Figure 1: Given a news article, current HG models can only generate plain, factual headlines, failing to learn from the original human reference. It is also much less attractive than the headlines with humorous, romantic and click-baity styles.", "Figure 2: The Transformer-based architecture o...
[ "What is increase in percentage of humor contained in headlines generated with TitleStylist method (w.r.t. baselines)?" ]
[ [ "2004.01980-7-Table2-1.png", "2004.01980-Results and Discussion ::: Human Evaluation Results-0" ] ]
[ "Humor in headlines (TitleStylist vs Multitask baseline):\nRelevance: +6.53% (5.87 vs 5.51)\nAttraction: +3.72% (8.93 vs 8.61)\nFluency: +1.98% (9.29 vs 9.11)" ]
369
1809.08510
Towards Language Agnostic Universal Representations
When a bilingual student learns to solve word problems in math, we expect the student to be able to solve these problem in both languages the student is fluent in,even if the math lessons were only taught in one language. However, current representations in machine learning are language dependent. In this work, we pres...
{ "paragraphs": [ [ "Anecdotally speaking, fluent bilingual speakers rarely face trouble translating a task learned in one language to another. For example, a bilingual speaker who is taught a math problem in English will trivially generalize to other known languages. Furthermore there is a large collection...
{ "answers": [ { "annotation_id": [ "2772affc98683a66d3dcce5f2de60e09b6766a71" ], "answer": [ { "evidence": [ "To test this hypothesis we first trained UG-WGAN in English, Chinese and German following the procedure described in Section \"UG-WGAN\" . The embe...
{ "caption": [ "Figure 1: Architecture of UG-WGAN. The amount of languages can be trivially increased by increasing the number of language agnostic segments kj and ej .", "Figure 2: Ablation study of λ. Both Wasserstein and Perplexity estimates were done on a held out test set of documents.", "Figure 3: T...
[ "What are the languages they consider in this paper?", "Did they experiment with tasks other than word problems in math?" ]
[ [ "1809.08510-NLI-1", "1809.08510-Sentiment Analysis-0", "1809.08510-Discussion-1" ], [ "1809.08510-Sentiment Analysis-0", "1809.08510-NLI-0" ] ]
[ "The languages considered were English, Chinese, German, Russian, Arabic, Spanish, French", "They experimented with sentiment analysis and natural language inference task" ]
371
1804.08139
Same Representation, Different Attentions: Shareable Sentence Representation Learning from Multiple Tasks
Distributed representation plays an important role in deep learning based natural language processing. However, the representation of a sentence often varies in different tasks, which is usually learned from scratch and suffers from the limited amounts of training data. In this paper, we claim that a good sentence repr...
{ "paragraphs": [ [ "The distributed representation plays an important role in deep learning based natural language processing (NLP) BIBREF0 , BIBREF1 , BIBREF2 . On word level, many successful methods have been proposed to learn a good representation for single word, which is also called word embedding, su...
{ "answers": [ { "annotation_id": [ "7ffaf78b75616ebcfde905144671f6df281b64eb" ], "answer": [ { "evidence": [ "Table TABREF34 shows the performances of the different methods. From the table, we can see that the performances of most tasks can be improved with...
{ "caption": [ "Figure 1: Three schemes of information sharing in multi-task leaning. (a) stacked shared-private scheme, (b) parallel shared-private scheme, (c) our proposed attentive sharing scheme.", "Figure 2: Static Task-Attentive Sentence Encoding", "Figure 3: Dynamic Task-Attentive Sentence Encoding...
[ "What evaluation metrics are used?" ]
[ [ "1804.08139-Exp I: Sentiment Classification-10", "1804.08139-5-Table2-1.png" ] ]
[ "Accuracy on each dataset and the average accuracy on all datasets." ]
373
1808.08850
WiSeBE: Window-based Sentence Boundary Evaluation
Sentence Boundary Detection (SBD) has been a major research topic since Automatic Speech Recognition transcripts have been used for further Natural Language Processing tasks like Part of Speech Tagging, Question Answering or Automatic Summarization. But what about evaluation? Do standard evaluation metrics like precisi...
{ "paragraphs": [ [ "The goal of Automatic Speech Recognition (ASR) is to transform spoken data into a written representation, thus enabling natural human-machine interaction BIBREF0 with further Natural Language Processing (NLP) tasks. Machine translation, question answering, semantic parsing, POS tagging,...
{ "answers": [ { "annotation_id": [ "e10bf13313ef392947f468f04f9da956e8e27efc" ], "answer": [ { "evidence": [ "We focused evaluation over a small but diversified dataset composed by 10 YouTube videos in the English language in the news context. The selected ...
{ "caption": [], "file": [] }
[ "What kind of Youtube video transcripts did they use?", "What makes it a more reliable metric?" ]
[ [ "1808.08850-Dataset-0" ], [ "1808.08850-F1 mean F1_{mean} vs. WiSeBEWiSeBE-0", "1808.08850-Conclusions-0" ] ]
[ "youtube video transcripts on news covering different topics like technology, human rights, terrorism and politics", "It takes into account the agreement between different systems" ]
375
1909.02560
Adversarial Examples with Difficult Common Words for Paraphrase Identification
Despite the success of deep models for paraphrase identification on benchmark datasets, these models are still vulnerable to adversarial examples. In this paper, we propose a novel algorithm to generate a new type of adversarial examples to study the robustness of deep paraphrase identification models. We first sample ...
{ "paragraphs": [ [ "Paraphrase identification is to determine whether a pair of sentences are paraphrases of each other BIBREF0. It is important for applications such as duplicate post matching on social media BIBREF1, plagiarism detection BIBREF2, and automatic evaluation for machine translation BIBREF3 o...
{ "answers": [ { "annotation_id": [ "681a882f9993ee910dcfa813a103b6ff2967c52d" ], "answer": [ { "evidence": [ "Adversarial training can often improve model robustness BIBREF25, BIBREF27. We also fine-tune the target models using adversarial training. At each...
{ "caption": [ "Figure 1: Two examples with labels matched and unmatched respectively, originally from the Quora Question Pairs corpus (Iyer, Dandekar, and Csernai, 2017). “(P)” and “(Q)” are original sentences, and “(P’)” and “(Q’)” are adversarially modified sentences. Modified words are highlighted in bold. “O...
[ "How much in experiments is performance improved for models trained with generated adversarial examples?" ]
[ [ "1909.02560-Experiments ::: Adversarial Training-1", "1909.02560-Experiments ::: Adversarial Training-0" ] ]
[ "Answer with content missing: (Table 1) The performance of all the target models raises significantly, while that on the original\nexamples remain comparable (e.g. the overall accuracy of BERT on modified examples raises from 24.1% to 66.0% on Quora)" ]
376
2001.02380
A Neural Approach to Discourse Relation Signal Detection
Previous data-driven work investigating the types and distributions of discourse relation signals, including discourse markers such as 'however' or phrases such as 'as a result' has focused on the relative frequencies of signal words within and outside text from each discourse relation. Such approaches do not allow us ...
{ "paragraphs": [ [ "The development of formal frameworks for the analysis of discourse relations has long gone hand in hand with work on signaling devices. The analysis of discourse relations is also closely tied to what a discourse structure should look like and what discourse goals should be fulfilled in...
{ "answers": [ { "annotation_id": [ "bc99e782390d90ced46c5f522324a7aad5e1e4e4" ], "answer": [ { "evidence": [ "We reason that, if a token is important for predicting the correct label, masking it will degrade the model's classification accuracy, or at least ...
{ "caption": [ "Table 1: Russian discourse relation signals, reproduced from Toldova et al. (2017).", "Table 2: RST relations and their frequencies in the GUM corpus.", "Figure 1: A visualization of an RST analysis of (4) with the signal tokens highlighted.", "Table 3: An overview of the taxonomy and ...
[ "How is the delta-softmax calculated?" ]
[ [ "2001.02380-Automatic Signal Extraction ::: A Contextualized Neural Model ::: Signaling Metric-7", "2001.02380-Automatic Signal Extraction ::: A Contextualized Neural Model ::: Signaling Metric-6" ] ]
[ "Answer with content missing: (Formula) Formula is the answer." ]
378
1809.02494
Meteorologists and Students: A resource for language grounding of geographical descriptors
We present a data resource which can be useful for research purposes on language grounding tasks in the context of geographical referring expression generation. The resource is composed of two data sets that encompass 25 different geographical descriptors and a set of associated graphical representations, drawn as poly...
{ "paragraphs": [ [ "Language grounding, i.e., understanding how words and expressions are anchored in data, is one of the initial tasks that are essential for the conception of a data-to-text (D2T) system BIBREF0 , BIBREF1 . This can be achieved through different means, such as using heuristics or machine ...
{ "answers": [ { "annotation_id": [ "28b6cedb469ce68555dbd02a6518fc81ea4b3068" ], "answer": [ { "evidence": [ "The resource is composed of data from two different surveys. In both surveys subjects were asked to draw on a map (displayed under a Mercator proje...
{ "caption": [ "Figure 1: Snapshot of the version of the survey answered by the meteorologists (translated from Spanish).", "Figure 3: Representation of polygon drawings by experts and associated contour plot showing the percentage of overlapping answers for “Eastern Galicia”.", "Table 1: List of geograph...
[ "Which two datasets does the resource come from?" ]
[ [ "1809.02494-The resource and its interest-2", "1809.02494-The resource and its interest-1", "1809.02494-The resource and its interest-0" ] ]
[ "two surveys of two groups, school students and meteorologists, who were asked to draw on a map a polygon representing a given geographical descriptor" ]
381
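The record above mentions contour plots showing "the percentage of overlapping answers", i.e., for each map location, the share of respondents whose drawn polygon contains it. A minimal sketch of that computation follows; the ray-casting helper and the toy polygons are illustrative assumptions, not the resource's actual processing code.

```python
# Sketch of an overlap-percentage computation for polygon-drawing surveys:
# for a given point, count how many respondents' polygons contain it.
# Polygons are lists of (x, y) vertices; containment uses ray casting.
def contains(poly, x, y):
    """Ray-casting point-in-polygon test for a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Edge straddles the horizontal ray from (x, y)?
        if (y1 > y) != (y2 > y):
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

def overlap_pct(polys, x, y):
    """Percentage of polygons containing the point (x, y)."""
    return 100.0 * sum(contains(p, x, y) for p in polys) / len(polys)

# Two toy "respondent" polygons: overlapping axis-aligned squares.
answers = [
    [(0, 0), (2, 0), (2, 2), (0, 2)],
    [(1, 1), (3, 1), (3, 3), (1, 3)],
]
print(overlap_pct(answers, 1.5, 1.5))  # inside both squares → 100.0
print(overlap_pct(answers, 0.5, 0.5))  # inside only the first → 50.0
```

Evaluating this over a regular grid of points yields exactly the kind of percentage field that a contour plot can visualize.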
1909.07734
SocialNLP EmotionX 2019 Challenge Overview: Predicting Emotions in Spoken Dialogues and Chats
We present an overview of the EmotionX 2019 Challenge, held at the 7th International Workshop on Natural Language Processing for Social Media (SocialNLP), in conjunction with IJCAI 2019. The challenge entailed predicting emotions in spoken and chat-based dialogues using augmented EmotionLines datasets. EmotionLines con...
{ "paragraphs": [ [ "Emotions are a central component of our existence as human beings, and are manifested by physiological and psychological changes that often affect behavior and action. Emotions involve a complicated interplay of mind, body, language, and culture BIBREF0.", "Detecting and recognizi...
{ "answers": [ { "annotation_id": [ "28e5993bee909e14b3ce914e076195ded918a615" ], "answer": [ { "evidence": [ "BIBREF9 Two different BERT models were developed. For Friends, pre-training was done using a sliding window of two utterances to provide dialogue c...
{ "caption": [ "Table 1: Reliability of Agreement (κ)", "Table 2: Emotion Label Distribution", "Table 3: Dialogue Length Distribution and Number of Utterances", "Table 4: Example of Augmented Utterance", "Table 5: Dialogue Excerpts from Friends (top) and EmotionPush (bottom)", "Table 6: F-scor...
[ "What is the size of the second dataset?", "How large is the first dataset?", "Who was the top-scoring team?" ]
[ [ "1909.07734-Datasets-0" ], [ "1909.07734-Datasets-0" ], [ "1909.07734-5-Table7-1.png", "1909.07734-4-Table6-1.png", "1909.07734-Results-0" ] ]
[ "1 000 labeled dialogues for training and 240 unlabeled dialogues for evaluation", "1 000 labeled dialogues for training and 240 unlabeled dialogues for evaluation", "IDEA" ]
382
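The record above notes that one EmotionX system trained BERT "using a sliding window of two utterances to provide dialogue context", pairing each utterance with its predecessor. A minimal sketch of that pairing scheme follows; the function name and the empty-string context for the first utterance are illustrative assumptions, not the challenge's actual preprocessing code.

```python
# Sketch of a sliding window of two utterances over a dialogue: each training
# instance pairs an utterance with the one before it, giving the classifier
# local conversational context (as in BERT's sentence-pair input format).
def sliding_pairs(dialogue):
    """Yield (context, utterance) pairs; the first utterance has no context."""
    return [(dialogue[i - 1] if i > 0 else "", dialogue[i])
            for i in range(len(dialogue))]

chat = ["Hey, are you okay?", "Not really.", "What happened?"]
for context, utterance in sliding_pairs(chat):
    print(repr(context), "->", repr(utterance))
```

Each pair would then be encoded as a two-segment BERT input, with the emotion label attached to the second segment's utterance.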