id string (length 10) | title string (length 19–145) | abstract string (length 273–1.91k) | full_text dict | qas dict | figures_and_tables dict | question list | retrieval_gt list | answer_gt list | __index_level_0__ int64 (0–887) |
|---|---|---|---|---|---|---|---|---|---|
2003.04707 | Neuro-symbolic Architectures for Context Understanding | Computational context understanding refers to an agent's ability to fuse disparate sources of information for decision-making and is, therefore, generally regarded as a prerequisite for sophisticated machine reasoning capabilities, such as in artificial intelligence (AI). Data-driven and knowledge-driven methods are tw... | {
"paragraphs": [
[
"Context understanding is a natural property of human cognition that supports our decision-making capabilities in complex sensory environments. Humans are capable of fusing information from a variety of modalities (e.g., auditory, visual) in order to perform different tasks, ranging from ... | {
"answers": [
{
"annotation_id": [
"43aa9c9aec71e4345b87b4c2827f95b401ceeb1a",
"7bb9f8680341f126321baa89a9994b6e2424f617"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
... | {
"caption": [
"Figure 1. Scene Ontology: (a) formal definition of a Scene, and (b) a subset of Features-of-Interests and events defined within a taxonomy.",
"Figure 2. 2D visualizations of KGEs of NuScenes instances generated from the TransE algorithm.",
"Figure 3. Evaluation results of the NuScenes data... | [
"How do they interpret the model?"
] | [
[
"2003.04707-Applications of Neuro-symbolism ::: Application II: Neural Question-Answering using Commonsense Knowledge Bases ::: Knowledge elicitation-0"
]
] | [
"They find relations that connect questions to the answer-options."
] | 348 |
1605.05166 | Digital Stylometry: Linking Profiles Across Social Networks | There is an ever growing number of users with accounts on multiple social media and networking sites. Consequently, there is increasing interest in matching user accounts and profiles across different social networks in order to create aggregate profiles of users. In this paper, we present models for Digital Stylometry... | {
"paragraphs": [
[
"Stylometry is defined as \"the statistical analysis of variations in literary style between one writer or genre and another\". It is a centuries-old practice, dating back to the early Renaissance. It is most often used to attribute authorship to disputed or anonymous documents. Stylometry... | {
"answers": [
{
"annotation_id": [
"17afde77c24d8fc689f8d2ff8304eb3b2d91cc46",
"5ee73fd0c6d68e3a4fa98bf8db02c24145db6b50"
],
"answer": [
{
"evidence": [
"Motivated by traditional stylometry and the growing interest in matching user accounts across I... | {
"caption": [
"Fig. 1: Distribution of number of posts per user for Twitter and Facebook, from our collected dataset.",
"Table 2: Performance of different linguistic models, tested on 5,612 users (11,224 accounts), sorted by accuracy. Best results are shown bold.",
"Table 3: Performance of different temp... | [
"what elements of each profile did they use?"
] | [
[
"1605.05166-Introduction-2"
]
] | [
"No profile elements"
] | 349 |
1607.00167 | SentiBubbles: Topic Modeling and Sentiment Visualization of Entity-centric Tweets | Social Media users tend to mention entities when reacting to news events. The main purpose of this work is to create entity-centric aggregations of tweets on a daily basis. By applying topic modeling and sentiment analysis, we create data visualization insights about current events and people reactions to those events ... | {
"paragraphs": [
[
"Entities play a central role in the interplay between social media and online news BIBREF0 . Every day, millions of tweets are generated about local and global news, including people's reactions and opinions regarding the events displayed on those news stories. Trending personalities, organ... | {
"answers": [
{
"annotation_id": [
"2fcc6e42f3e4b56dbba7f8426bae3fde3dc1c735",
"ee5104d9d87ff5755f3996fd2814d336355393a1"
],
"answer": [
{
"evidence": [
"Figure 1 depicts an overview of the data mining process pipeline applied in this work. To colle... | {
"caption": [
"Figure 1: The system pipeline"
],
"file": [
"2-Figure1-1.png"
]
} | [
"What model was used for sentiment analysis?"
] | [
[
"1607.00167-Sentiment Analysis-0",
"1607.00167-Introduction-2"
]
] | [
"Lexicon-based word-level SA."
] | 350 |
1605.05156 | Tweet Acts: A Speech Act Classifier for Twitter | Speech acts are a way to conceptualize speech as action. This holds true for communication on any platform, including social media platforms such as Twitter. In this paper, we explored speech act recognition on Twitter by treating it as a multi-class classification problem. We created a taxonomy of six speech acts for ... | {
"paragraphs": [
[
"In recent years, the micro-blogging platform Twitter has become a major social media platform with hundreds of millions of users. People turn to Twitter for a variety of purposes, from everyday chatter to reading about breaking news. The volume plus the public nature of Twitter (less th... | {
"answers": [
{
"annotation_id": [
"6817fd5a23e6fe7235ae2cddfed7e8b1c6dc1149",
"d396f8ba689f567527244647e543f3bc6b3ae45f"
],
"answer": [
{
"evidence": [
"Semantic Features",
"Opinion Words: We used the \"Harvard General Inquirer\" lexico... | {
"caption": [
"Figure 1: Distribution of speech acts for all six topics and three types.",
"Table 1: Example tweets for each speech act type.",
"Figure 2: The dependency tree and the part of speech tags of a sample tweet.",
"Table 2: F1 scores for each speech act category. The best scores for each ca... | [
"What syntactic and semantic features are proposed?",
"what are the proposed semantic features?",
"what syntactic features are proposed?",
"what datasets were used?"
] | [
[
"1605.05156-Semantic Features-2",
"1605.05156-Semantic Features-1",
"1605.05156-Syntactic Features-2",
"1605.05156-Syntactic Features-0",
"1605.05156-Semantic Features-0",
"1605.05156-Semantic Features-4",
"1605.05156-Syntactic Features-5",
"1605.05156-Semantic Features-5",
"16... | [
"Semantic Features: Opinion Words, Vulgar Words, Emoticons, Speech Act Verbs, N-grams.\nSyntactic Features: Punctuations, Twitter-specific Characters, Abbreviations, Dependency Sub-trees, Part-of-speech.",
"Binary features indicating opinion words, vulgar words, emoticons, speech act verbs and unigram, bigram... | 352 |
1804.05306 | Transcribing Lyrics From Commercial Song Audio: The First Step Towards Singing Content Processing | Spoken content processing (such as retrieval and browsing) is maturing, but the singing content is still almost completely left out. Songs are human voice carrying plenty of semantic information just as speech, and may be considered as a special type of speech with highly flexible prosody. The various problems in song ... | {
"paragraphs": [
[
"The exploding multimedia content over the Internet has created a new world of spoken content processing, for example retrieval BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , browsing BIBREF5 , summarization BIBREF0 , BIBREF5 , BIBREF6 , BIBREF7 , and comprehension BIBREF8 , BIBR... | {
"answers": [
{
"annotation_id": [
"19240a2f30d9fe3be28a4c512222820af3fda419",
"20eef34bd5379ab896d0fd373afda6afa13f061a"
],
"answer": [
{
"evidence": [
"Row(1) is for Model A in Fig. FIGREF5 taken as the baseline, which was trained on LibriSpeech d... | {
"caption": [
"Table 1. Information of training and testing sets in vocal data. The lengths are all measured in minutes.",
"Fig. 1. The overall structure for training the acoustic models.",
"Fig. 2. Approaches for prolonged vowels: (a) extended lexicon (vowels can be repeated or not), (b) increased self-... | [
"What was the baseline?"
] | [
[
"1804.05306-Recognition Results-1"
]
] | [
"a model trained on LibriSpeech data with SAT and with a LM also trained on LibriSpeech"
] | 353 |
1802.02614 | Enhance word representation for out-of-vocabulary on Ubuntu dialogue corpus | Ubuntu dialogue corpus is the largest public available dialogue corpus to make it feasible to build end-to-end deep neural network models directly from the conversation data. One challenge of Ubuntu dialogue corpus is the large number of out-of-vocabulary words. In this paper we proposed a method which combines the gen... | {
"paragraphs": [
[
"The ability for a machine to converse with humans in a natural and coherent manner is one of the challenging goals in AI and natural language understanding. One problem in chat-oriented human-machine dialog systems is replying to a message within conversation contexts. Existing methods can be di... | {
"answers": [
{
"annotation_id": [
"8b0671b4e70e2a30dda1425bd9a0e5a1ecaae7b2",
"cc081aa72bd311eab1c803937556fc6c0110ce09"
],
"answer": [
{
"evidence": [
"It can be observed that the performance is significantly degraded without two special tags. In ... | {
"caption": [
"Figure 1: A high-level overview of ESIM layout. Compared with the original one in (Chen et al., 2017), the diagram adds character-level embedding and replaces average pooling by LSTM last state summary vector.",
"Table 1: Statistics of the Ubuntu Dialogue Corpus (V2).",
"Table 2: Statisti... | [
"how does end of utterance and token tags affect the performance",
"what kind of conversations are in the douban conversation corpus?"
] | [
[
"1802.02614-9-Table7-1.png",
"1802.02614-The roles of utterance and turn tags-1"
],
[
"1802.02614-Dataset-1"
]
] | [
"The performance is significantly degraded without the two special tags (a drop of 0.025 in MRR)",
"Conversations from a popular social networking service in China"
] | 354 |
1910.12203 | Do Sentence Interactions Matter? Leveraging Sentence Level Representations for Fake News Classification | The rising growth of fake news and misleading information through online media outlets demands an automatic method for detecting such news articles. Of the few limited works which differentiate between trusted vs other types of news article (satire, propaganda, hoax), none of them model sentence interactions within a d... | {
"paragraphs": [
[
"In today's day and age of social media, there are ample opportunities for fake news production, dissemination and consumption. BIBREF0 break down fake news into three categories: hoax, propaganda and satire. A hoax article typically tries to convince the reader about a cooked-up story w... | {
"answers": [
{
"annotation_id": [
"598c24c2929238943443984a58fbd9e0979639f0",
"b6b844f14574b159384f4a333a43f4b25bb3edd9"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: 2-way classification results on SLN. *n-fold cross validation (precisi... | {
"caption": [
"Figure 1: TSNE visualization (Van Der Maaten, 2014) of sentence embeddings obtained using BERT (Devlin et al., 2019) for two kind of news articles from SLN. A point denotes a sentence and the number indicates which paragraph it belonged to in the article.",
"Table 1: Statistics about different... | [
"What other evaluation metrics are reported?",
"What out of domain scenarios did they evaluate on?",
"What was their state of the art accuracy score?"
] | [
[
"1910.12203-Experimental Setting-0",
"1910.12203-4-Table3-1.png",
"1910.12203-Results-0",
"1910.12203-4-Table2-1.png"
],
[
"1910.12203-Experimental Setting-1",
"1910.12203-Experimental Setting-2"
],
[
"1910.12203-4-Table3-1.png",
"1910.12203-Results-0",
"1910.12203-4-Ta... | [
"Macro-averaged F1-score, macro-averaged precision, macro-averaged recall",
"In 2-way classification they used LUN-train for training, LUN-test for development and the entire SLN dataset for testing. In 4-way classification they used LUN-train for training and development and LUN-test for testing.",
"In 2-way c... | 355 |
1911.07620 | Exploiting Token and Path-based Representations of Code for Identifying Security-Relevant Commits | Public vulnerability databases such as CVE and NVD account for only 60% of security vulnerabilities present in open-source projects, and are known to suffer from inconsistent quality. Over the last two years, there has been considerable growth in the number of known vulnerabilities across projects available in various ... | {
"paragraphs": [
[
"The use of open-source software has been steadily increasing for some time now, with the number of Java packages in Maven Central doubling in 2018. However, BIBREF0 states that there has been an 88% growth in the number of vulnerabilities reported over the last two years. In order to de... | {
"answers": [
{
"annotation_id": [
"2f570b089e48d89dba45acfc02d76568cb6f74d3",
"caed6b923f2083c5bc87f6a8c5b7cfdda6e2b786"
],
"answer": [
{
"evidence": [
"We modify our model accordingly for every research question, based on changes in the input repr... | {
"caption": [
"Figure 1: A code snippet from Apache Struts with the Javadoc stating which vulnerability was addressed",
"Figure 2: Illustration of our H-CNN model for learning on diff tokens, where the labels are the following: (a) source code diffs from multiple files, (b) stacked token embeddings, (c) conv... | [
"What metrics are used?",
"How long is the dataset?",
"What dataset do they use?"
] | [
[
"1911.07620-Results and Discussion-0",
"1911.07620-6-Table1-1.png"
],
[
"1911.07620-Experimental Setup-1"
],
[
"1911.07620-Experimental Setup-2",
"1911.07620-Experimental Setup-1"
]
] | [
"Accuracy, precision, recall and F1 score.",
"2022",
"Dataset of publicly disclosed vulnerabilities from 205 Java projects from GitHub and 1000 Java repositories from Github"
] | 356 |
1911.03353 | SEPT: Improving Scientific Named Entity Recognition with Span Representation | We introduce a new scientific named entity recognizer called SEPT, which stands for Span Extractor with Pre-trained Transformers. In recent papers, span extractors have been demonstrated to be a powerful model compared with sequence labeling models. However, we discover that with the development of pre-trained language... | {
"paragraphs": [
[
"With the increasing number of scientific publications in the past decades, improving the performance of automatic information extraction from papers has been a task of concern. Scientific named entity recognition is the key task of information extraction because the overall perfor... | {
"answers": [
{
"annotation_id": [
"1aede180c2016ae5ee3ce9f5e0e91b55e2c325cb",
"7bf4a969884eebe28ab31e8019699b6e9de5cce7"
],
"answer": [
{
"evidence": [
"In the sampling layer, we sample continuous sub-strings from the embedding layer, which is also... | {
"caption": [
"Figure 1: The overview architecture of SEPT. Firstly, feeding the entire abstract into the model and obtain BERT embeddings for each token (word piece). Then, in the training phase, rather than enumerate all spans: (a) For negative spans, we sample them randomly. (b) For ground truths, we keep the... | [
"How much better is performance of SEPT compared to previous state-of-the-art?"
] | [
[
"1911.03353-Experiments ::: Overall performance-1",
"1911.03353-Experiments ::: Overall performance-2",
"1911.03353-3-Table1-1.png"
]
] | [
"SEPT shows an improvement of 3.9% in Recall and 1.3% in F1 over the best-performing baseline (SCIIE (SciBERT))"
] | 357 |
1906.04236 | Identifying Visible Actions in Lifestyle Vlogs | We consider the task of identifying human actions visible in online videos. We focus on the widely spread genre of lifestyle vlogs, which consist of videos of people performing actions while verbally describing them. Our goal is to identify if actions mentioned in the speech description of a video are visually present.... | {
"paragraphs": [
[
"There has been a surge of recent interest in detecting human actions in videos. Work in this space has mainly focused on learning actions from articulated human pose BIBREF7 , BIBREF8 , BIBREF9 or mining spatial and temporal information from videos BIBREF10 , BIBREF11 . A number of reso... | {
"answers": [
{
"annotation_id": [
"de99ba1f5dd95c7f599c3bd2102b643f68ed330f",
"f849ff74adc4aeec5141e394ca91cdcc43119879"
],
"answer": [
{
"evidence": [
"The paper makes three main contributions. First, we introduce a novel dataset consisting of 1,2... | {
"caption": [
"Table 1: Comparison between our dataset and other video human action recognition datasets. # Actions show either the number of action classes in that dataset (for the other datasets), or the number of unique visible actions in that dataset (ours); # Verbs shows the number of unique verbs in the ac... | [
"How many videos did they use?",
"How long are the videos?"
] | [
[
"1906.04236-5-Table3-1.png",
"1906.04236-Introduction-4",
"1906.04236-Visual Action Annotation-5"
],
[
"1906.04236-5-Table3-1.png",
"1906.04236-Visual Action Annotation-5",
"1906.04236-Data Gathering-4"
]
] | [
"177",
"On average videos are 16.36 minutes long"
] | 358 |
1806.07042 | Response Generation by Context-aware Prototype Editing | Open domain response generation has achieved remarkable progress in recent years, but sometimes yields short and uninformative responses. We propose a new paradigm for response generation, that is response generation by editing, which significantly increases the diversity and informativeness of the generation results. ... | {
"paragraphs": [
[
"In recent years, non-task-oriented chatbots, focused on responding to humans intelligently on a variety of topics, have drawn much attention from both academia and industry. Existing approaches can be categorized into generation-based methods BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBRE... | {
"answers": [
{
"annotation_id": [
"407dc75c485e92d8b5810c9a95d51a892ec08edd"
],
"answer": [
{
"evidence": [
"We evaluate our model on four criteria: fluency, relevance, diversity and originality. We employ Embedding Average (Average), Embedding Extrema (Ex... | {
"caption": [
"Table 1: An example of context-aware prototypes editing. Underlined words mean they do not appear in the original",
"Figure 1: Architecture of our model.",
"Table 2: Automatic evaluation results. Numbers in bold mean that improvement from the model on that metric is statistically significa... | [
"Which dataset do they evaluate on?"
] | [
[
"1806.07042-Introduction-4",
"1806.07042-Evaluation Results-0",
"1806.07042-Experiment setting-0"
]
] | [
"Chinese dataset containing human-human context-response pairs collected from Douban Group"
] | 360 |
1912.11637 | Explicit Sparse Transformer: Concentrated Attention Through Explicit Selection | Self-attention based Transformer has demonstrated the state-of-the-art performances in a number of natural language processing tasks. Self-attention is able to model long-term dependencies, but it may suffer from the extraction of irrelevant information in the context. To tackle the problem, we propose a novel model ca... | {
"paragraphs": [
[
"Understanding natural language requires the ability to pay attention to the most relevant information. For example, people tend to focus on the most relevant segments to search for the answers to their questions in mind during reading. However, retrieval problems may occur if irrelevan... | {
"answers": [
{
"annotation_id": [
"6d4fdd063ed5eadea9e9b759663dc5fb432fa725",
"b08d0f8f516da89105bffa8bb3fd246a850ddcbd"
],
"answer": [
{
"evidence": [
"Explicit Sparse Transformer is still based on the Transformer framework. The difference is in t... | {
"caption": [
"Figure 1: Illustration of self-attention in the models. The orange bar denotes the attention score of our proposed model while the blue bar denotes the attention scores of the vanilla Transformer. The orange line denotes the attention between the target word “tim” and the selected top-k positions ... | [
"What do they mean by explicit selection of most relevant segments?",
"What datasets they used for evaluation?"
] | [
[
"1912.11637-Explicit Sparse Transformer-2",
"1912.11637-Introduction-3",
"1912.11637-Explicit Sparse Transformer-1"
],
[
"1912.11637-Results ::: Neural Machine Translation ::: Dataset-0",
"1912.11637-Results ::: Image Captioning ::: Dataset-0",
"1912.11637-Results ::: Language Modeling... | [
"focusing on the top-k segments that contribute the most in terms of correlation to the query",
"For En-De translation, the newstest 2014 set from the WMT 2014 En-De translation dataset; for En-Vi translation, the tst2013 set from the IWSLT 2015 dataset; and for De-En translation, the test set from IWSLT 2014."
] | 362 |
2002.04326 | ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning | Recent powerful pre-trained language models have achieved remarkable performance on most of the popular datasets for reading comprehension. It is time to introduce more challenging datasets to push the development of this field towards more comprehensive reasoning of text. In this paper, we introduce a new Reading Comp... | {
"paragraphs": [
[
"Machine reading comprehension (MRC) is a fundamental task in Natural Language Processing, which requires models to understand a body of text and answer a particular question related to the context. With success of unsupervised representation learning in NLP, language pre-training based ... | {
"answers": [
{
"annotation_id": [
"4a100f80305bc4d632d96416d83262b20c85f40e",
"f5078c09a30b7cd1c253909a0337e8ae2eca0feb"
],
"answer": [
{
"evidence": [
"We construct a dataset containing 6,138 logical reasoning questions sourced from open websites ... | {
"caption": [
"Figure 1: Performance comparison of state-of-the-art models and humans (graduate students) on EASY and HARD set of ReClor testing set.",
"Table 1: An example in the ReClor dataset which is modified from the Law School Admission Council (2019b).",
"Table 2: Statistics of several multiple-ch... | [
"How are biases identified in the dataset?"
] | [
[
"2002.04326-Experiments ::: Experiments to Find Biased Data-0",
"2002.04326-ReClor Data Collection and Analysis ::: Data Biases in the Dataset-0"
]
] | [
"They identify biases as lexical choice and sentence length for right and wrong answer options in an isolated context, without the question and paragraph context that typically precedes answer options. Lexical choice was identified by calculating per-token correlation scores with \"right\" and \"wrong\" labels. They ... | 364 |
1911.04128 | A hybrid text normalization system using multi-head self-attention for mandarin | In this paper, we propose a hybrid text normalization system using multi-head self-attention. The system combines the advantages of a rule-based model and a neural model for text preprocessing tasks. Previous studies in Mandarin text normalization usually use a set of hand-written rules, which are hard to improve on ge... | {
"paragraphs": [
[
"Text Normalization (TN) is a process to transform non-standard words (NSW) into spoken-form words (SFW) for disambiguation. In Text-To-Speech (TTS), text normalization is an essential procedure to normalize unreadable numbers, symbols or characters, such as transforming “$20” to “twenty... | {
"answers": [
{
"annotation_id": [
"22f2b4ee7bddbeabf5b0a71426c348f2d61511a6",
"e067b30dfc7a475bd550a7398468b175c7cceedc"
],
"answer": [
{
"evidence": [
"Imbalanced dataset is a challenge for the task because the top patterns are taking too much att... | {
"caption": [
"Fig. 1. Flowchart of the proposed hybrid TN system.",
"Fig. 2. Multi-head self-attention model structure.",
"Table 1. Examples of some dataset pattern rules.",
"Fig. 3. Label distribution for dataset.",
"Table 3. Model performance on the test dataset.",
"Table 4. Model performa... | [
"What models do they compare to?"
] | [
[
"1911.04128-Method ::: Rule-based TN model-0",
"1911.04128-Experiments ::: Model Performance-0"
]
] | [
"six different variations of their multi-head attention model"
] | 365 |
1606.07947 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of translation that is potentially simpler than statistical approaches. However to reach competitive performance, NMT models need to be exceedingly large. In this paper we consider applying knowledge distillation approaches (Bucila et al., 2006; Hi... | {
"paragraphs": [
[
"Neural machine translation (NMT) BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 is a deep learning-based method for translation that has recently shown promising results as an alternative to statistical approaches. NMT systems directly model the probability of the next word in the target sentenc... | {
"answers": [
{
"annotation_id": [
"36478d2555038291da8f3bcc71c0beb939e37d0d",
"f9c28ed43f00667329aa13dc88edf160ccaf601d"
],
"answer": [
{
"evidence": [
"In this work, we investigate knowledge distillation in the context of neural machine translatio... | {
"caption": [
"Figure 1: Overview of the different knowledge distillation approaches. In word-level knowledge distillation (left) cross entropy is minimized between the student/teacher distributions (yellow) for each word in the actual target sequence (ECD), as well as between the student distribution and the de... | [
"What type of weight pruning do they use?"
] | [
[
"1606.07947-Weight Pruning-1"
]
] | [
"Prune x% of the parameters by removing the weights with the lowest absolute values."
] | 367 |
1603.00957 | Question Answering on Freebase via Relation Extraction and Textual Evidence | Existing knowledge-based question answering systems often rely on small annotated training data. While shallow methods like relation extraction are robust to data scarcity, they are less expressive than the deep meaning representation methods like semantic parsing, thereby failing at answering questions involving multi... | {
"paragraphs": [
[
"Since the advent of large structured knowledge bases (KBs) like Freebase BIBREF0 , YAGO BIBREF1 and DBpedia BIBREF2 , answering natural language questions using those structured KBs, also known as KB-based question answering (or KB-QA), is attracting increasing research efforts from bot... | {
"answers": [
{
"annotation_id": [
"f6b4737b5920af299e52e7813ed51ce21f83139f",
"d3434912ff20486205a195a31968af9908e55c48"
],
"answer": [
{
"evidence": [
"In this section we introduce the experimental setup, the main results and detailed analysis of ... | {
"caption": [
"Figure 1: An illustration of our method to find answers for the given question who did shaq first play for.",
"Figure 2: Overview of the multi-channel convolutional neural network for relation extraction. We is the word embedding matrix, W1 is the convolution matrix, W2 is the activation matri... | [
"What baselines is the neural relation extractor compared to?",
"What additional evidence they use?",
"How much improvement they get from the previous state-of-the-art?",
"What is the previous state-of-the-art?"
] | [
[
"1603.00957-6-Table1-1.png",
"1603.00957-Results and Discussion-5",
"1603.00957-Relation Extraction-0"
],
[
"1603.00957-Introduction-4",
"1603.00957-Introduction-3",
"1603.00957-Our Method-0"
],
[
"1603.00957-6-Table1-1.png"
],
[
"1603.00957-6-Table1-1.png",
"1603.0... | [
"Berant et al. (2013), Yao and Van Durme (2014), Xu et al. (2014), Berant and Liang (2014), Bao et al. (2014), Bordes et al. (2014), Dong et al. (2015), Yao (2015), Bast and Haussmann (2015), Berant and Liang (2015), Reddy et al. (2016), Yih et al. (2015)",
"Wikipedia sentences that validate or support KB facts",... | 368 |
1804.08000 | Fine-grained Entity Typing through Increased Discourse Context and Adaptive Classification Thresholds | Fine-grained entity typing is the task of assigning fine-grained semantic types to entity mentions. We propose a neural architecture which learns a distributional semantic representation that leverages a greater amount of semantic context -- both document and sentence level information -- than prior work. We find that ... | {
"paragraphs": [
[
"Named entity typing is the task of detecting the type (e.g., person, location, or organization) of a named entity in natural language text. Entity type information has been shown to be useful in natural language tasks such as question answering BIBREF0 , knowledge-base population BIBREF1 , B... | {
"answers": [
{
"annotation_id": [
"1e11c26fe1f3b8160a264541e4fea91b4a2faaea",
"e897d47189bb3ea21c0ca8b212481885c1540df4"
],
"answer": [
{
"evidence": [
"General Model",
"Given a type embedding vector INLINEFORM0 and a featurizer INLINEF... | {
"caption": [
"Figure 1: Neural architecture for predicting the types of entity mention “Monopoly” in the text “... became a top seller ... Monopoly is played in 114 countries. ...”. Part of document-level context is omitted.",
"Table 1: Statistics of the datasets.",
"Table 2: Examples showing the improv... | [
"What is the architecture of the model?",
"What fine-grained semantic types are considered?"
] | [
[
"1804.08000-1-Figure1-1.png"
],
[
"1804.08000-5-Table6-1.png"
]
] | [
"Document-level context encoder, entity and sentence-level context encoders with common attention, then logistic regression, followed by adaptive thresholds.",
"/other/event/accident, /person/artist/music, /other/product/mobile phone, /other/event/sports event, /other/product/car"
] | 369 |
1908.05803 | Quoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning | Machine comprehension of texts longer than a single sentence often requires coreference resolution. However, most current reading comprehension benchmarks do not contain complex coreferential phenomena and hence fail to evaluate the ability of models to resolve coreference. We present a new crowdsourced dataset contain... | {
"paragraphs": [
[
"Paragraphs and other longer texts typically make multiple references to the same entities. Tracking these references and resolving coreference is essential for full machine comprehension of these texts. Significant progress has recently been made in reading comprehension research, due t... | {
"answers": [
{
"annotation_id": [
"6189f8f5ea579ed90b2a27b72aef3d6ac6a0d2e5",
"d8755dd39ac935d43dbcae8fad14d365822acdb6"
],
"answer": [
{
"evidence": [
"We crowdsourced questions about these paragraphs on Mechanical Turk. We asked workers to find t... | {
"caption": [
"Figure 1: Example paragraph and questions from the dataset. Highlighted text in paragraphs is where the questions with matching highlights are anchored. Next to the questions are the relevant coreferent mentions from the paragraph. They are bolded for the first question, italicized for the second,... | [
"What is the strong baseline model used?"
] | [
[
"1908.05803-Dataset Construction ::: Crowdsourcing setup-0",
"1908.05803-3-Table3-1.png"
]
] | [
"Passage-only heuristic baseline, QANet, QANet+BERT, BERT QA"
] | 370 |
1910.02677 | Controllable Sentence Simplification | Text simplification aims at making a text easier to read and understand by simplifying grammar and structure while keeping the underlying information identical. It is often considered an all-purpose generic task where the same simplification is suitable for all; however multiple audiences can benefit from simplified te... | {
"paragraphs": [
[
"In Natural Language Processing, the Text Simplification task aims at making a text easier to read and understand. Text simplification can be beneficial for people with cognitive disabilities such as aphasia BIBREF0, dyslexia BIBREF1 and autism BIBREF2 but also for second language learne... | {
"answers": [
{
"annotation_id": [
"981d92561bcace231b04d92adf28b52e9cadab5c",
"dcffc4b7eec54d1c17e99ceb22f55bc13e8a06ec"
],
"answer": [
{
"evidence": [
"Our models are trained and evaluated on the WikiLarge dataset BIBREF10 which contains 296,402/2... | {
"caption": [
"Table 1: Example of parametrization on the number of characters. Here the source and target simplifications respectively contain 71 and 22 characters which gives a compression ratio of 0.3. We prepend the <NbChars 0.3> token to the source sentence. Similarly, the Levenshtein similarity between the... | [
"What are the baseline models?"
] | [
[
"1910.02677-Introduction-2",
"1910.02677-Experiments ::: Overall Performance-12",
"1910.02677-Experiments ::: Overall Performance-4",
"1910.02677-Experiments ::: Overall Performance-10",
"1910.02677-Related Work ::: Sentence Simplification-3",
"1910.02677-Experiments ::: Overall Performanc... | [
"PBMT-R, Hybrid, SBMT+PPDB+SARI, DRESS-LS, Pointer+Ent+Par, NTS+SARI, NSELSTM-S and DMASS+DCSS"
] | 371 |
1909.08211 | Modeling Conversation Structure and Temporal Dynamics for Jointly Predicting Rumor Stance and Veracity | Automatically verifying rumorous information has become an important and challenging task in natural language processing and social media analytics. Previous studies reveal that people's stances towards rumorous messages can provide indicative clues for identifying the veracity of rumors, and thus determining the stanc... | {
"paragraphs": [
[
"Social media websites have become the main platform for users to browse information and share opinions, facilitating news dissemination greatly. However, the characteristics of social media also accelerate the rapid spread and dissemination of unverified information, i.e., rumors BIBREF... | {
"answers": [
{
"annotation_id": [
"964f58b459fd91591fa78ec84cc006a3e07def5a",
"f8c695fa9c87103a74fb32a9ef8f02848ed0a555"
],
"answer": [
{
"evidence": [
"The first is SemEval-2017 task 8 BIBREF16 dataset. It includes 325 rumorous conversation thread... | {
"caption": [
"Figure 1: A conversation thread discussing the rumorous tweet “1”. Three different perspectives for learning the stance feature of the reply tweet “2” are illustrated.",
"Figure 2: Stance distributions of tweets discussing true rumors, false rumors, and unverified rumors, respectively (Better ... | [
"How do they split the dataset when training and evaluating their models?",
"How much improvement does their model yield over previous methods?"
] | [
[
"1909.08211-Experiments ::: Data & Evaluation Metric-1",
"1909.08211-Experiments ::: Data & Evaluation Metric-2"
],
[
"1909.08211-6-Table2-1.png",
"1909.08211-7-Table3-1.png"
]
] | [
"The SemEval-2017 task 8 dataset is split into train, development and test sets: two events go into the test set and eight events go to the train and development sets for every thread in the dataset. The PHEME dataset is split via leave-one-event-out cross-validation: one event goes to the test set and the rest of the events go to the training set... | 373
2001.05540 | Insertion-Deletion Transformer | We propose the Insertion-Deletion Transformer, a novel transformer-based neural architecture and training method for sequence generation. The model consists of two phases that are executed iteratively, 1) an insertion phase and 2) a deletion phase. The insertion phase parameterizes a distribution of insertions on the c... | {
"paragraphs": [
[
"Neural sequence models BIBREF0, BIBREF1 typically generate outputs in an autoregressive left-to-right manner. These models have been successfully applied to a range of task, for example machine translation BIBREF2. They often rely on an encoder that processes the source sequence, and a ... | {
"answers": [
{
"annotation_id": [
"3abf3b8fd8647ab9bb506667148bd49ab35bab78",
"66ed26eafa0c9053eb8edd5eb01b20591e96d8ad"
],
"answer": [
{
"evidence": [
"The shifted alphabetic sequence task should be trivial to solve for a powerful sequence to sequ... | {
"caption": [
"Table 3: BLEU scores for the Caesar’s cipher task.",
"Figure 1: Insertion-Deletion Transformer; reads from bottom to top. The bottom row are the source and target sequence, as sampled according to step 1 and 2 in Section 2.1. These are passed through the models to create an output sequence. [C... | [
"How much is BELU score difference between proposed approach and insertion-only method?"
] | [
[
"2001.05540-Experiments ::: Learning shifted alphabetic sequences-3",
"2001.05540-Experiments ::: Learning Caesar's Cipher-3",
"2001.05540-4-Table2-1.png",
"2001.05540-3-Table3-1.png"
]
] | [
"Learning shifted alphabetic sequences: 21.34\nCaesar's Cipher: 2.02"
] | 375 |
1805.00195 | An Annotated Corpus for Machine Reading of Instructions in Wet Lab Protocols | We describe an effort to annotate a corpus of natural language instructions consisting of 622 wet lab protocols to facilitate automatic or semi-automatic conversion of protocols into a machine-readable format and benefit biological research. Experimental results demonstrate the utility of our corpus for developing mach... | {
"paragraphs": [
[
"As the complexity of biological experiments increases, there is a growing need to automate wet laboratory procedures to avoid mistakes due to human error and also to enhance the reproducibility of experimental biological research BIBREF0 . Several efforts are currently underway to defin... | {
"answers": [
{
"annotation_id": [
"90f2a35726b939353f74bc813fe072d93703d3fe",
"c999efe2a60871c4c96dd94cb732e30c1d9a2ecd"
],
"answer": [
{
"evidence": [
"Our final corpus consists of 622 protocols annotated by a team of 10 annotators. Corpus statist... | {
"caption": [
"Figure 2: Example sentences (#5 and #6) from the lab protocol in Figure 1 as shown in the BRAT annotation interface.",
"Figure 3: An action graph can be directly derived from annotations as seen in Figure 2 (example sentence #6) .",
"Table 1: Statistics of our Wet Lab Protocol Corpus by pr... | [
"what ML approaches did they experiment with?"
] | [
[
"1805.00195-4-Table4-1.png",
"1805.00195-Entity Identification and Classification-0",
"1805.00195-Methods-0"
]
] | [
"MaxEnt, BiLSTM, BiLSTM+CRF"
] | 376 |
1612.02695 | Towards better decoding and language model integration in sequence to sequence models | The recently proposed Sequence-to-Sequence (seq2seq) framework advocates replacing complex data processing pipelines, such as an entire automatic speech recognition system, with a single neural network trained in an end-to-end fashion. In this contribution, we analyse an attention-based seq2seq speech recognition syste... | {
"paragraphs": [
[
"Deep learning BIBREF0 has led to many breakthroughs including speech and image recognition BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . A subfamily of deep models, the Sequence-to-Sequence (seq2seq) neural networks have proved to be very successful on complex transduction... | {
"answers": [
{
"annotation_id": [
"1eede96d33534ed67dde5406c2dbce17ff4a778a",
"fed5916c556a02d4a391df7c2fe1543a196de83c"
],
"answer": [
{
"evidence": [
"To emit a character the speller uses the attention mechanism to find a set of relevant activati... | {
"caption": [
"Figure 1: Influence of beam width and SoftMax temperature on decoding accuracy. In the baseline case (no label smoothing) increasing the temperature reduces the error rate. When label smoothing is used the next-character prediction improves, as witnessed by WER for beam size=1, and tuning the temp... | [
"What are the solutions proposed for the seq2seq shortcomings?"
] | [
[
"1612.02695-Solutions to Partial Transcripts Problem-2",
"1612.02695-Label Smoothing Prevents Overconfidence-0"
]
] | [
"label smoothing, use of coverage"
] | 377 |
2002.04745 | On Layer Normalization in the Transformer Architecture | The Transformer is widely used in natural language processing tasks. To train a Transformer however, one usually needs a carefully designed learning rate warm-up stage, which is shown to be crucial to the final performance but will slow down the optimization and bring more hyper-parameter tunings. In this paper, we fir... | {
"paragraphs": [
[
"The Transformer BIBREF0 is one of the most commonly used neural network architectures in natural language processing. Layer normalization BIBREF1 plays a key role in Transformer's success. The originally designed Transformer places the layer normalization between the residual blocks, wh... | {
"answers": [
{
"annotation_id": [
"7de04e6caed80b1342de0d76c04907e4f15cba86",
"c91a9e7561de0ba98d445479ed693367bd922265"
],
"answer": [
{
"evidence": [
"We record validation loss of the model checkpoints and plot them in Figure FIGREF47. Similar to... | {
"caption": [
"Figure 1. (a) Post-LN Transformer layer; (b) Pre-LN Transformer layer.",
"Table 1. Post-LN Transformer v.s. Pre-LN Transformer",
"Figure 2. Performances of the models optimized by Adam and SGD on the IWSLT14 De-En task.",
"Figure 3. The norm of gradients of 1. different layers in the 6... | [
"What experiments do they perform?"
] | [
[
"2002.04745-Experiments ::: Experiment Settings ::: Unsupervised Pre-training (BERT)-0",
"2002.04745-Optimization for the Transformer ::: The learning rate warm-up stage ::: Experimental setting-0",
"2002.04745-Experiments ::: Experiment Settings ::: Machine Translation-1",
"2002.04745-Experiments... | [
"whether the learning rate warm-up stage is essential, whether the final model performance is sensitive to the value of T_warmup."
] | 378 |
1907.12984 | DuTongChuan: Context-aware Translation Model for Simultaneous Interpreting | In this paper, we present DuTongChuan, a novel context-aware translation model for simultaneous interpreting. This model allows to constantly read streaming text from the Automatic Speech Recognition (ASR) model and simultaneously determine the boundaries of Information Units (IUs) one after another. The detected IU is... | {
"paragraphs": [
[
"Recent progress in Automatic Speech Recognition (ASR) and Neural Machine Translation (NMT), has facilitated the research on automatic speech translation with applications to live and streaming scenarios such as Simultaneous Interpreting (SI). In contrast to non-real time speech translat... | {
"answers": [
{
"annotation_id": [
"43889e492b0a0576442f3f33538981e7e9cbdbf1",
"5d025eecd561d49893ea589f69c5a861146b3c5f"
],
"answer": [
{
"evidence": [
"We use a subset of the data available for NIST OpenMT08 task . The parallel training corpus con... | {
"caption": [
"Figure 1: For this sentence, a full-sentence NMT model produces an appropriate translation with, however, a long latency in the context of simultaneous translation, as it needs to wait until the end of the full sentence to start translating. In contrast, a sub-sentence NMT model outputs a translat... | [
"Does larger granularity lead to better translation quality?"
] | [
[
"1907.12984-Experiments-7",
"1907.12984-Experiments-3"
]
] | [
"It depends on the model used."
] | 379 |
1911.09247 | How to Ask Better Questions? A Large-Scale Multi-Domain Dataset for Rewriting Ill-Formed Questions | We present a large-scale dataset for the task of rewriting an ill-formed natural language question to a well-formed one. Our multi-domain question rewriting MQR dataset is constructed from human contributed Stack Exchange question edit histories. The dataset contains 427,719 question pairs which come from 303 domains. ... | {
"paragraphs": [
[
"Understanding text and voice questions from users is a difficult task as it involves dealing with “word salad” and ill-formed text. Ill-formed questions may arise from imperfect speech recognition systems, search engines, dialogue histories, inputs from low bandwidth devices such as mob... | {
"answers": [
{
"annotation_id": [
"9f42cddbbb09e8b052dd10af8dfcd4a8666430af",
"aaedcbd8fc30914fd762e506265cadd71885ae2f"
],
"answer": [
{
"evidence": [
"We use 303 sub areas from Stack Exchange data dumps. The full list of area names is in the appe... | {
"caption": [
"Table 1: Examples of pairs of ill-formed and well-formed questions from the MQR dataset.",
"Table 2: Examples given to annotators for binary question quality scores.",
"Table 3: Example question pairs given to annotators to judge semantic equivalence.",
"Table 4: Summary of manual anno... | [
"What characterizes the 303 domains? e.g. is this different subject tags?"
] | [
[
"1911.09247-MQR Dataset Construction and Analysis ::: Dataset Domains-0",
"1911.09247-MQR Dataset Construction and Analysis-1",
"1911.09247-MQR Dataset Construction and Analysis ::: Dataset Domains-2"
]
] | [
"The domains represent different subfields related to the topic of the questions. "
] | 380 |
2003.12660 | Towards Supervised and Unsupervised Neural Machine Translation Baselines for Nigerian Pidgin | Nigerian Pidgin is arguably the most widely spoken language in Nigeria. Variants of this language are also spoken across West and Central Africa, making it a very important language. This work aims to establish supervised and unsupervised neural machine translation (NMT) baselines between English and Nigerian Pidgin. W... | {
"paragraphs": [
[
"Over 500 languages are spoken in Nigeria, but Nigerian Pidgin is the uniting language in the country. Between three and five million people are estimated to use this language as a first language in performing their daily activities. Nigerian Pidgin is also considered a second language t... | {
"answers": [
{
"annotation_id": [
"6a63288879b719c863f850192f040985095ea9e7",
"9c549e2d144726d2318a45d693b6e894210b5ae2"
],
"answer": [
{
"evidence": [
"The dataset used for the supervised was obtained from the JW300 large-scale, parallel corpus fo... | {
"caption": [
"Table 3: Unsupervised (Word-Level) Results from English to Nigerian Pidgin",
"Table 4: Supervised (Word-Level) Results from English to Nigerian Pidgin",
"Table 5: Supervised (Byte Pair Encoding) Results from English to Nigerian Pidgin",
"Table 6: Unsupervised (Word-Level) Results from ... | [
"How long is their dataset?",
"What is the best performing system?"
] | [
[
"2003.12660-Methodology ::: Dataset-0"
],
[
"2003.12660-Results ::: Quantitative-2",
"2003.12660-Results ::: Quantitative-3"
]
] | [
"Data used has total of 23315 sentences.",
"For English to Pidgin, the supervised model with byte-pair-encoding tokenization was the best, while for Pidgin to English, the supervised model with word-level tokenization was the best."
] | 381 |
1610.00479 | Nonsymbolic Text Representation | We introduce the first generic text representation model that is completely nonsymbolic, i.e., it does not require the availability of a segmentation or tokenization method that attempts to identify words or other symbolic units in text. This applies to training the parameters of the model on a training corpus as well ... | {
"paragraphs": [
[
"Character-level models can be grouped into three classes. (i) End-to-end models learn a separate model on the raw character (or byte) input for each task; these models estimate task-specific parameters, but no representation of text that would be usable across tasks is computed. Through... | {
"answers": [
{
"annotation_id": [
"22f0550ef7af638f50ec0ac75d1557c9b257c1d4",
"3ad20616b18703df613acf108a20edf17221fa72"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
... | {
"caption": [
"Table 1: String operations that on average do not change meaning. “@” stands for space. ‡ is the left or right boundary of the ngram.",
"Table 2: Left: Evaluation results for named entity typing. Right: Neighbors of character ngrams. Rank r = 1/r = 2: nearest / second-nearest neighbor.",
"... | [
"By how much do they outperform existing text denoising models?"
] | [
[
"1610.00479-6-Table2-1.png",
"1610.00479-Experiments-5",
"1610.00479-Experiments-6"
]
] | [
"Answer with content missing: (Table 4) Mean reciprocal rank of proposed model is 0.76 compared to 0.64 of bag-of-ngrams."
] | 384 |
1905.01347 | Auditing ImageNet: Towards a Model-driven Framework for Annotating Demographic Attributes of Large-Scale Image Datasets | The ImageNet dataset ushered in a flood of academic and industry interest in deep learning for computer vision applications. Despite its significant impact, there has not been a comprehensive investigation into the demographic attributes of images contained within the dataset. Such a study could lead to new insights on... | {
"paragraphs": [
[
"ImageNet BIBREF0 , released in 2009, is a canonical dataset in computer vision. ImageNet follows the WordNet lexical database of English BIBREF1 , which groups words into synsets, each expressing a distinct concept. ImageNet contains 14,197,122 images in 21,841 synsets, collected throug... | {
"answers": [
{
"annotation_id": [
"5b073db56320099a0734eb0679d8cb5c7d06dd08",
"b298e168287702dd8c2a319698c93cde03e6e904"
],
"answer": [
{
"evidence": [
"In order to provide demographic annotations at scale, there exist two feasible methods: crowdso... | {
"caption": [
"Table 2. Gender-biased Synsets, ILSVRC 2012 ImageNet Subset",
"Table 3. Top-level Statistics of ImageNet ‘person’ Subset",
"Table 4. Gender-biased Synsets, ImageNet ‘person’ Subset"
],
"file": [
"3-Table2-1.png",
"3-Table3-1.png",
"3-Table4-1.png"
]
} | [
"How do they determine demographics on an image?",
"What is the most underrepresented person group in ILSVRC?"
] | [
[
"1905.01347-Gender Annotation-0",
"1905.01347-Apparent Age Annotation-0",
"1905.01347-Methodology-0",
"1905.01347-Introduction-4",
"1905.01347-Face Detection-0"
],
[
"1905.01347-Results-0",
"1905.01347-3-Table2-1.png"
]
] | [
"using model-driven face detection, apparent age annotation and gender annotation",
"Females and males with age 75+"
] | 385 |
1711.04964 | Dynamic Fusion Networks for Machine Reading Comprehension | This paper presents a novel neural model - Dynamic Fusion Network (DFN), for machine reading comprehension (MRC). DFNs differ from most state-of-the-art models in their use of a dynamic multi-strategy attention process, in which passages, questions and answer candidates are jointly fused into attention vectors, along w... | {
"paragraphs": [
[
"The goal of Machine Reading Comprehension (MRC) is to have machines read a text passage and then generate an answer (or select an answer from a list of given candidates) for any question about the passage. There has been a growing interest in the research community in exploring neural M... | {
"answers": [
{
"annotation_id": [
"20cb93d5d46a050a690a36186d3e2b407eadb696",
"ec5cf556a73f497a071d8af1b7c844c02d290c4b"
],
"answer": [
{
"evidence": [
"Table 3 shows a comparison between DFN and a few previously proposed models. All models were tr... | {
"caption": [
"Figure 1: Above: Questions from English exams. Below: Questions from SQuAD.",
"Table 1: Percentage of questions in each dataset that requires Single-sentence Inference(SI) and Multi-sentence Inference(MI), from (Lai et al., 2017).",
"Figure 2: Left: Examples from RACE. Right: Examples from... | [
"How much improvement is given on RACE by their introduced approach?"
] | [
[
"1711.04964-Model Performance-0",
"1711.04964-Ablation Studies-6"
]
] | [
"7.3% on RACE-M and 1.5% on RACE-H"
] | 389 |
1611.01116 | Binary Paragraph Vectors | Recently Le&Mikolov described two log-linear models, called Paragraph Vector, that can be used to learn state-of-the-art distributed representations of documents. Inspired by this work, we present Binary Paragraph Vector models: simple neural networks that learn short binary codes for fast information retrieval. We sho... | {
"paragraphs": [
[
"One of the significant challenges in contemporary information processing is the sheer volume of available data. BIBREF0 , for example, claim that the amount of digital data in the world doubles every two years. This trend underpins efforts to develop algorithms that can efficiently sear... | {
"answers": [
{
"annotation_id": [
"36ab3da79c1c3e3776b3389aeaaf99bcac18c1cb",
"3e05e4269b5e07840d8567804560753ed5d3ae52"
],
"answer": [
{
"evidence": [
"In this work we present Binary Paragraph Vector models, an extensions to PV-DBOW and PV-DM that... | {
"caption": [
"Figure 1: The Binary PV-DBOW model. Modifications to the original PV-DBOW model are highlighted.",
"Figure 2: The Real-Binary PV-DBOW model. Modifications to the original PV-DBOW model are highlighted.",
"Figure 3: The Binary PV-DM model. Modifications to the original PV-DM model are highl... | [
"How do they show that binary paragraph vectors capture semantics?"
] | [
[
"1611.01116-Experiments-0",
"1611.01116-Transfer learning-0"
]
] | [
"They perform information-retrieval tasks on popular benchmarks"
] | 391 |
1908.10001 | Real-World Conversational AI for Hotel Bookings | In this paper, we present a real-world conversational AI system to search for and book hotels through text messaging. Our architecture consists of a frame-based dialogue management system, which calls machine learning models for intent classification, named entity recognition, and information retrieval subtasks. Our ch... | {
"paragraphs": [
[
"Task-oriented chatbots have recently been applied to many areas in e-commerce. In this paper, we describe a task-oriented chatbot system that provides hotel recommendations and deals. Users access the chatbot through third-party messaging platforms, such as Facebook Messenger (Figure FI... | {
"answers": [
{
"annotation_id": [
"45cbac6b761799f2f93a16c2234e78a905d2c44f",
"d0c3494980ecc752f5012780ee00e02c1587b891"
],
"answer": [
{
"evidence": [
"We also implement a rule-based unigram matching baseline, which takes the entry with highest un... | {
"caption": [
"Fig. 1. Screenshot of a typical conversation with our bot in Facebook Messenger.",
"TABLE I SOME INTENT CLASSES PREDICTED BY OUR MODEL.",
"Fig. 2. The intent model determines for each incoming message, whether the bot can respond adequately. If the message cannot be recognized as one of ou... | [
"How is their NER model trained?",
"How well does the system perform?",
"Where does their information come from?"
] | [
[
"1908.10001-Models ::: Named entity recognition-4"
],
[
"1908.10001-3-TableII-1.png",
"1908.10001-4-TableIII-1.png"
],
[
"1908.10001-Chatbot architecture-2",
"1908.10001-Chatbot architecture ::: Data labelling-0",
"1908.10001-Chatbot architecture ::: Data labelling-1",
"1908.10... | [
"Trained using SpaCy and fine-tuned with their data of hotel and location entities",
"F1 score of 0.96 on recognizing both hotel and location entities and Top-1 recall of 0.895 with the IR BERT model",
"Information from users and information from database of approximately 100,000 cities and 300,000 hotels, po... | 395 |
1710.10380 | Speeding up Context-based Sentence Representation Learning with Non-autoregressive Convolutional Decoding | Context plays an important role in human language understanding, thus it may also be useful for machines learning vector representations of language. In this paper, we explore an asymmetric encoder-decoder structure for unsupervised context-based sentence representation learning. We carefully designed experiments to sh... | {
"paragraphs": [
[
"Learning distributed representations of sentences is an important and hard topic in both the deep learning and natural language processing communities, since it requires machines to encode a sentence with rich language content into a fixed-dimension vector filled with real numbers. Our ... | {
"answers": [
{
"annotation_id": [
"44af3d7690e3a896e074292d95e02396c0cc40be",
"f48678483981e059daef2577621d06a629b50d81"
],
"answer": [
{
"evidence": [
"The downstream tasks for evaluation include semantic relatedness (SICK, BIBREF17 ), paraphrase ... | {
"caption": [
"Figure 1: Our proposed model is composed of an RNN encoder, and a CNN decoder. During training, a batch of sentences are sent to the model, and the RNN encoder computes a vector representation for each of sentences; then the CNN decoder needs to reconstruct the paired target sequence, which contai... | [
"How long are the two unlabelled corpora?"
] | [
[
"1710.10380-Experiment Settings-2"
]
] | [
"71000000, 142000000"
] | 397 |
1911.11025 | Women, politics and Twitter: Using machine learning to change the discourse | Including diverse voices in political decision-making strengthens our democratic institutions. Within the Canadian political system, there is gender inequality across all levels of elected government. Online abuse, such as hateful tweets, leveled at women engaged in politics contributes to this inequity, particularly t... | {
"paragraphs": [
[
"Our political systems are unequal, and we suffer for it. Diversity in representation around decision-making tables is important for the health of our democratic institutions BIBREF0. One example of this inequity of representation is the gender disparity in politics: there are fewer wome... | {
"answers": [
{
"annotation_id": [
"538c485c136354df717bf8213dcec2e7fb527c64",
"95a27af0ea105cb48e92f8da869a654178bab9f8"
],
"answer": [
{
"evidence": [
"We collect tweets from Twitter's real-time streaming API. The stream listener uses the open-sou... | {
"caption": [
"Figure 1: Visualizing the training data distribution. Relative frequency of hateful versus not hateful tweets for varying levels of the Perspective API [17] TOXICITY score. Normalized histograms are plotted underneath kernel density estimation (KDE) plots.",
"Figure 2: 10-fold cross validation... | [
"Where do the supportive tweets about women come from? Are they automatically or manually generated?"
] | [
[
"1911.11025-Methods ::: Collecting Twitter handles, predicting candidate gender, curating “positivitweets”-1"
]
] | [
"Manually (volunteers composed them)"
] | 398 |
1708.07252 | A Study on Neural Network Language Modeling | An exhaustive study on neural network language modeling (NNLM) is performed in this paper. Different architectures of basic neural network language models are described and examined. A number of different improvements over basic neural network language models, including importance sampling, word classes, caching and bi... | {
"paragraphs": [
[
"Generally, a well-designed language model makes a critical difference in various natural language processing (NLP) tasks, like speech recognition BIBREF0 , BIBREF1 , machine translation BIBREF2 , BIBREF3 , semantic extraction BIBREF4 , BIBREF5 and etc. Language modeling (LM), therefore,... | {
"answers": [
{
"annotation_id": [
"a6b5b6abbdfcb39e1e2daa0e4f525e12b5551ef4",
"faa6db19dcaab95ad5b9d01c0e855c3332582984"
],
"answer": [
{
"evidence": [
"In BIBREF33 , significant improvement on neural machine translation (NMT) for an English to Fre... | {
"caption": [
"Figure 1: Feed-forward neural network language model",
"Figure 2: Recurrent neural network language model",
"Table 1: Camparative results of different neural network language models",
"Figure 3: Architecture of class based LSTM-RNNLM",
"Table 2: Results for class-based models",
... | [
"What directions are suggested to improve language models?"
] | [
[
"1708.07252-Future Work-0",
"1708.07252-Future Work-1"
]
] | [
"Improved architectures for ANNs, and the use of linguistic properties of words or sentences as features."
] | 400 |
1808.07733 | Revisiting the Importance of Encoding Logic Rules in Sentiment Classification | We analyze the performance of different sentiment classification models on syntactically complex inputs like A-but-B sentences. The first contribution of this analysis addresses reproducible research: to meaningfully compare different models, their accuracies must be averaged over far more random seeds than what has tr... | {
"paragraphs": [
[
"In this paper, we explore the effectiveness of methods designed to improve sentiment classification (positive vs. negative) of sentences that contain complex syntactic structures. While simple bag-of-words or lexicon-based methods BIBREF1 , BIBREF2 , BIBREF3 achieve good performance on ... | {
"answers": [
{
"annotation_id": [
"d964ee252b07c12511f632517c13ea10086ed016",
"e9121c250e2cedee1a7acc8731eb2aa905bef292"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Average performance (across 100 seeds) of ELMo on the SST2 task. We sh... | {
"caption": [
"Figure 1: Variation in models trained on SST-2 (sentenceonly). Accuracies of 100 randomly initialized models are plotted against the number of epochs of training (in gray), along with their average accuracies (in red, with 95% confidence interval error bars). The inset density plot shows the distr... | [
"What logic rules can be learned using ELMo?"
] | [
[
"1808.07733-4-Table2-1.png",
"1808.07733-Contextualized Word Embeddings-2"
]
] | [
"1).But 2).Eng 3). A-But-B"
] | 401 |
1912.04979 | Advances in Online Audio-Visual Meeting Transcription | This paper describes a system that generates speaker-annotated transcripts of meetings by using a microphone array and a 360-degree camera. The hallmark of the system is its ability to handle overlapped speech, which has been an unsolved problem in realistic settings for over a decade. We show that this problem can be ... | {
"paragraphs": [
[
"",
"The goal of meeting transcription is to have machines generate speaker-annotated transcripts of natural meetings based on their audio and optionally video recordings. Meeting transcription and analytics would be a key to enhancing productivity as well as improving accessibilit... | {
"answers": [
{
"annotation_id": [
"678907d17834bcdac0f0710a6b4a06edae1ff0fc",
"d9d930ea80bc9a6c7eb47a6063e7d9aab519590b"
],
"answer": [
{
"evidence": [
"Our vision processing module (see Fig. FIGREF1) locates and identifies all persons in a room fo... | {
"caption": [
"Fig. 1. Processing flow diagram of SRD framework for two stream configuration. To run the whole system online, the video processing and SR modules are assigned their own dedicated resources. WPE: weighted prediction error minimization for dereverberation. CSS: continuous speech separation. SR: spe... | [
"Are face tracking, identification, localization, etc. multimodal inputs to some ML model, or is the system programmed by hand?",
"What are baselines used?"
] | [
[
"1912.04979-Speaker Diarization-7",
"1912.04979-Speaker Diarization ::: Face tracking and identification-2",
"1912.04979-Speaker Diarization ::: Sound source localization-1",
"1912.04979-Speaker Diarization ::: Face tracking and identification-4",
"1912.04979-Speaker Diarization-13",
"1912... | [
"Inputs to an ML model",
"The baseline system was a conventional speech recognition approach using single-output beamforming."
] | 403 |
1712.00733 | Incorporating External Knowledge to Answer Open-Domain Visual Questions with Dynamic Memory Networks | Visual Question Answering (VQA) has attracted much attention since it offers insight into the relationships between the multi-modal analysis of images and natural language. Most of the current algorithms are incapable of answering open-domain questions that require to perform reasoning beyond the image contents. To add... | {
"paragraphs": [
[
"Visual Question Answering (VQA) is a ladder towards a better understanding of the visual world, which pushes forward the boundaries of both computer vision and natural language processing. A system in VQA tasks is given a text-based question about an image, which is expected to generate... | {
"answers": [
{
"annotation_id": [
"62586625f55ac7e9d3273b01967711c882f71cc5",
"9b9c11f4510e0e214add684e185578da0e84bb31",
"eb558d610df82d74f297adaf1a571b175b02dd28"
],
"answer": [
{
"evidence": [
"We also compare our method with several alt... | {
"caption": [
"Figure 1: A real case of open-domain visual question answering based on internal representation of an image and external knowledge. Recent success of deep learning provides a good opportunity to implement the closed-domain VQAs, but it is incapable of answering open-domain questions when external ... | [
"What are the baselines for this paper?",
"What VQA datasets are used for evaluating this task? ",
"How do they model external knowledge? ",
"What type of external knowledge has been used for this paper? "
] | [
[
"1712.00733-Implementation Details-2",
"1712.00733-Implementation Details-5",
"1712.00733-Implementation Details-3",
"1712.00733-Implementation Details-6",
"1712.00733-Implementation Details-4"
],
[
"1712.00733-Experiments-0",
"1712.00733-Datasets-0"
],
[
"1712.00733-Overvi... | [
"LSTM with attention, memory augmented model, ",
"Visual7W and an automatically constructed open-domain VQA dataset",
"Word embeddings from knowledge triples (subject, rel, object) from ConceptNet are fed to an RNN",
"ConceptNet, which contains common-sense relationships between daily words"
] | 404 |
1905.07894 | Abusive Language Detection in Online Conversations by Combining Content-and Graph-based Features | In recent years, online social networks have allowed worldwide users to meet and discuss. As guarantors of these communities, the administrators of these platforms must prevent users from adopting inappropriate behaviors. This verification task, mainly done by humans, is more and more difficult due to the ever growing ... | {
"paragraphs": [
[
"In recent years, online social networks have allowed world-wide users to meet and discuss. As guarantors of these communities, the administrators of these platforms must prevent users from adopting inappropriate behaviors. This verification task, mainly done by humans, is more and more ... | {
"answers": [
{
"annotation_id": [
"ba7b83588c27681325033ae8203679dbd1280336",
"ebba2332480b3e24329369c115b1c971c2e6845d"
],
"answer": [
{
"evidence": [
"In this paper, based on the assumption that the interactions between users and the content of t... | {
"caption": [
"FIGURE 1 | Representation of our processing pipeline. Existing methods refers to our previous work described in Papegnies et al. (2017b) (content-based method) and Papegnies et al. (2019) (graph-based method), whereas the contribution presented in this article appears on the right side (fusion str... | [
"What is the proposed algorithm or model architecture?",
"What fusion methods are applied?",
"What graph-based features are considered?"
] | [
[
"1905.07894-Fusion-0",
"1905.07894-Fusion-3",
"1905.07894-Introduction-4",
"1905.07894-Fusion-2",
"1905.07894-Fusion-1"
],
[
"1905.07894-Fusion-0",
"1905.07894-Fusion-3",
"1905.07894-Fusion-2",
"1905.07894-3-Figure1-1.png",
"1905.07894-Fusion-1"
],
[
"1905.07894... | [
"They combine content- and graph-based methods in new ways.",
"Early fusion, late fusion, hybrid fusion.",
"Top graph based features are: Coreness Score, PageRank Centrality, Strength Centrality, Vertex Count, Closeness Centrality, Closeness Centrality, Authority Score, Hub Score, Reciprocity and Closeness Cent... | 405 |
1610.09722 | Represent, Aggregate, and Constrain: A Novel Architecture for Machine Reading from Noisy Sources | In order to extract event information from text, a machine reading model must learn to accurately read and interpret the ways in which that information is expressed. But it must also, as the human reader must, aggregate numerous individual value hypotheses into a single coherent global analysis, applying global constra... | {
"paragraphs": [
[
"Recent work in the area of machine reading has focused on learning in a scenario with perfect information. Whether identifying target entities for simple cloze style queries BIBREF0 , BIBREF1 , or reasoning over short passages of artificially generated text BIBREF2 , short stories BIBRE... | {
"answers": [
{
"annotation_id": [
"2fe3fa0649460b7dfcb1a7c701462b1d88467eb8",
"35302e8088644acf13b1c4ae60291f27d42a8869"
],
"answer": [
{
"evidence": [
"We evaluate configurations of our proposed architecture across three measures. The first is a m... | {
"caption": [
"Figure 1: An example news cluster. While we assume all documents mention the target flight, inaccurate information (d1), incorrect labels (d2), and mentions of non-topical events (d5) are frequent sources of noise the model must deal with. Red tokens indicate mentions of values, i.e. candidate ans... | [
"what dataset did they use?"
] | [
[
"1610.09722-Data-0"
]
] | [
"Event dataset with news articles"
] | 407 |
1908.05925 | Incorporating Word and Subword Units in Unsupervised Machine Translation Using Language Model Rescoring | This paper describes CAiRE's submission to the unsupervised machine translation track of the WMT'19 news shared task from German to Czech. We leverage a phrase-based statistical machine translation (PBSMT) model and a pre-trained language model to combine word-level neural machine translation (NMT) and subword-level NM... | {
"paragraphs": [
[
"Machine translation (MT) has achieved huge advances in the past few years BIBREF1, BIBREF2, BIBREF3, BIBREF4. However, the need for a large amount of manual parallel data obstructs its performance under low-resource conditions. Building an effective model on low resource data or even in... | {
"answers": [
{
"annotation_id": [
"268971b483fc22f0c6b7cafb607c320314323f37",
"65358bdbd80bcf5f3135dd2dc8846f7cc62ccb48"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Unsupervised translation results. We report the scores of several eval... | {
"caption": [
"Figure 1: The illustration of our system. The translation procedure can be divided into five steps: (a) preprocessing, (b) translation generation (§2.1) from word-level NMT, subword-level NMT, and PBSMT. In the training, we fine-tune word-level and subword-level NMT models with pseudo-parallel dat... | [
"How is the quality of the translation evaluated?"
] | [
[
"1908.05925-6-Table2-1.png",
"1908.05925-Experiments ::: Results-0"
]
] | [
"They report the scores of several evaluation methods for every step of their approach."
] | 408 |
1709.06136 | Iterative Policy Learning in End-to-End Trainable Task-Oriented Neural Dialog Models | In this paper, we present a deep reinforcement learning (RL) framework for iterative dialog policy optimization in end-to-end task-oriented dialog systems. Popular approaches in learning dialog policy with RL include letting a dialog agent to learn against a user simulator. Building a reliable user simulator, however, ... | {
"paragraphs": [
[
"Task-oriented dialog system is playing an increasingly important role in enabling human-computer interactions via natural spoken language. Different from chatbot type of conversational agents BIBREF0 , BIBREF1 , BIBREF2 , task-oriented dialog systems assist users to complete everyday ta... | {
"answers": [
{
"annotation_id": [
"2d409b8b808ba940c1f31a01c6d38f8376c3faab",
"f9e43046a247c474c55b566ea6319bfcca32762a"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2. Evaluation results on the converted DSTC2 dataset."
],
... | {
"caption": [
"Fig. 1. Dialog agent network architecture.",
"Fig. 2. User simulator network architecture.",
"Fig. 3. System architecture for joint dialog agent and user simulator policy optimization with deep RL",
"Table 1. Statistics of the dataset",
"Fig. 5. Learning curve for average reward",
... | [
"By how much do they improve upon supervised traning methods?"
] | [
[
"1709.06136-Results and Analysis-2",
"1709.06136-6-Table2-1.png"
]
] | [
"A2C and REINFORCE-joint for joint policy optimization achieve improvements over the SL baseline of 29.4% and 25.7% success rate, 1.21 and 1.28 AvgReward, and 0.25 and -1.34 AvgSuccessTurnSize, respectively."
] | 411 |
2003.08769 | Personalized Taste and Cuisine Preference Modeling via Images | With the exponential growth in the usage of social media to share live updates about life, taking pictures has become an unavoidable phenomenon. Individuals unknowingly create a unique knowledge base with these images. The food images, in particular, are of interest as they contain a plethora of information. From the i... | {
"paragraphs": [
[
"A picture is worth a thousand words. Complex ideas can easily be depicted via an image. An image is a mine of data in the 21st century. With each person taking an average of 20 photographs every day, the number of photographs taken around the world each year is astounding. According to ... | {
"answers": [
{
"annotation_id": [
"282696575ffc20f26d205ec77f649ce5c76a8111",
"d799bb0f4cb189a6a45b542646ccd04441840d85"
],
"answer": [
{
"evidence": [
"METHODOLOGY",
"The real task lies in converting the image into interpretable data t... | {
"caption": [
"TABLE I UNIQUE INGREDIENTS",
"Fig. 1. Count of Recipes per Cuisine",
"Fig. 2. The above diagram represents the flow of the data pipeline along with the Models used.",
"Fig. 3. The top 20 most frequently occurring food labels",
"Fig. 4. The sum of the probabilities of each label occ... | [
"Is this done in form of unsupervised (clustering) or suppervised learning?",
"Does this study perform experiments to prove their claim that indeed personalized profiles will have inclination towards particular cuisines?"
] | [
[
"2003.08769-METHODOLOGY-0",
"2003.08769-METHODOLOGY ::: DATA PRE PROCESSING ::: To Classify Images as Food Images-0",
"2003.08769-METHODOLOGY-1",
"2003.08769-METHODOLOGY-3",
"2003.08769-METHODOLOGY ::: DATA PRE PROCESSING ::: To Remove Images with People-0",
"2003.08769-METHODOLOGY-2",
... | [
"Unsupervised",
"The study features a radar chart describing inclinations toward particular cuisines, but they do not perform any experiments"
] | 412 |
1909.13466 | Regressing Word and Sentence Embeddings for Regularization of Neural Machine Translation | In recent years, neural machine translation (NMT) has become the dominant approach in automated translation. However, like many other deep learning approaches, NMT suffers from overfitting when the amount of training data is limited. This is a serious issue for low-resource language pairs and many specialized translati... | {
"paragraphs": [
[
"Machine translation (MT) is a field of natural language processing (NLP) focussing on the automatic translation of sentences from a source language to a target language. In recent years, the field has been progressing quickly mainly thanks to the advances in deep learning and the advent... | {
"answers": [
{
"annotation_id": [
"4fd61e9949de2140a958d03822e29fe311887ec8",
"b880ae3014d9f608854bc7fbfa57026fe4b23ace"
],
"answer": [
{
"evidence": [
"Extensive experimentation over four language pairs of different dataset sizes (from small to la... | {
"caption": [
"Fig. 1: Baseline NMT model. (Left) The encoder receives the input sentence and generates a context vector cj for each decoding step using an attention mechanism. (Right) The decoder generates one-by-one the output vectors pj , which represent the probability distribution over the target vocabulary... | [
"What baselines do they compare to?",
"What training set sizes do they use?",
"What languages do they experiment with?"
] | [
[
"1909.13466-6-TableIII-1.png",
"1909.13466-6-TableII-1.png",
"1909.13466-6-TableI-1.png",
"1909.13466-6-TableIV-1.png",
"1909.13466-The Baseline NMT model-0",
"1909.13466-Introduction-7"
],
[
"1909.13466-Experiments ::: Datasets-1",
"1909.13466-Experiments ::: Datasets-0",
... | [
"A neural encoder-decoder architecture with attention using LSTMs or Transformers",
"89k, 114k, 291k, 5M",
"German-English, English-French, Czech-English, Basque-English pairs"
] | 413 |
1910.09916 | Automatic Extraction of Personality from Text: Challenges and Opportunities | In this study we examined the possibility to extract personality traits from a text. We created an extensive dataset by having experts annotate personality traits in a large number of texts from multiple online sources. From these annotated texts we selected a sample and made further annotations ending up with a large ... | {
"paragraphs": [
[
"Since the introduction of the personality concept, psychologists have worked to formulate theories and create models describing human personality and reliable measure to accordingly. The filed has been successful to bring forth a number of robust models with corresponding measures. One ... | {
"answers": [
{
"annotation_id": [
"28bc97f9eb3f6b365ba9add8d1a37d5159ba085f",
"8d30bad02fb2f8911b066f27ced0c7452b6aa0ac"
],
"answer": [
{
"evidence": [
"As our language model we used ULMFiT BIBREF21. ULMFiT is an NLP transfer learning algorithm tha... | {
"caption": [
"Figure 1. Workflow for building the models",
"Table III NUMBER OF TRAINING SAMPLES FOR EACH OF THE PERSONALITY FACTORS",
"Figure 2. Distribution of labeled samples for each of the factors of the large dataset.",
"Figure 3. Distribution of labeled samples for each of the factors of the ... | [
"What is the agreement of the dataset?"
] | [
[
"1910.09916-Model Training ::: Annotation-4"
]
] | [
"Answer with content missing: (Table 2): Krippendorff's alpha coefficient for dataset is: Stability -0.26, Extraversion 0.07, Openness 0.36, Agreeableness 0.51, Conscientiousness 0.31"
] | 414 |
1803.05160 | How to evaluate sentiment classifiers for Twitter time-ordered data? | Social media are becoming an increasingly important source of information about the public mood regarding issues such as elections, Brexit, stock market, etc. In this paper we focus on sentiment classification of Twitter data. Construction of sentiment classifiers is a standard text mining task, but here we address the... | {
"paragraphs": [
[
"Social media are becoming an increasingly important source of information about the public mood regarding issues such as elections, Brexit, stock market, etc. In this paper we focus on sentiment classification of Twitter data. Construction of sentiment classifiers is a standard text min... | {
"answers": [
{
"annotation_id": [
"2913dc7aa957eba425884f4446cf377cce62b244",
"74d87d1dd4df2f7416dc0d3accec833d9efb48c4"
],
"answer": [
{
"evidence": [
"We compare six estimation procedures in terms of different types of errors they incur. The erro... | {
"caption": [
"Table 1. Sentiment label distribution of Twitter datasets in 13 languages. The last column is a qualitative assessment of the annotation quality, based on the levels of the self- and inter-annotator agreement.",
"Fig 1. Creation of the estimation and gold standard data. Each labeled language d... | [
"Which European languages are targeted?"
] | [
[
"1803.05160-Data and models-0",
"1803.05160-5-Table1-1.png",
"1803.05160-Introduction-5"
]
] | [
"Albanian\nBulgarian\nEnglish\nGerman\nHungarian\nPolish\nPortuguese\nRussian\nSer/Cro/Bos\nSlovak\nSlovenian\nSpanish\nSwedish"
] | 415 |