Dataset schema:

| Field | Type | Observed range |
| --- | --- | --- |
| sections | list | lengths 0–910 |
| pub_date | string | 722 distinct values |
| doi | string | lengths 0–570 |
| references | list | lengths 0–835 |
| formulas | list | lengths 0–679 |
| title | string | lengths 0–235 |
| abstract | string | lengths 0–7.77k |
| authors | string | lengths 0–11.9k |
| figures | list | lengths 0–270 |
| citation_data | string | lengths 2–160k |
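A minimal sketch of loading and inspecting a row with the `datasets` library, assuming the corpus is hosted on the Hugging Face Hub; the repository id `user/parsed-papers` is a placeholder, not the real one, and the field types are inferred from the schema summary above:

```python
from datasets import load_dataset

# "user/parsed-papers" is a hypothetical repository id -- substitute the real one.
ds = load_dataset("user/parsed-papers", split="train")

# Scalar fields (title, abstract, authors, pub_date, doi) are plain strings;
# sections, references, formulas, and figures are lists of records.
row = ds[0]
print(row["title"])
print(row["pub_date"], row["doi"])
print(len(row["references"]), "references,", len(row["formulas"]), "formulas")
```

The preview rows below follow the schema's field order; fields that are empty in a given row are omitted.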
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b3", "b37", "b4", "b11", "b14", "b34", "b13", "b25", "b19", "b42" ], "table_ref": [], "text": "Pre-trained language models (PLMs) have quickly become a staple in t...
2023-10-23
10.18653/v1/2021.findings-emnlp.410
[ { "authors": "Alan Ansell; Maria Edoardo; Jonas Ponti; Sebastian Pfeiffer; Goran Ruder; Ivan Glavaš; Anna Vulić; Korhonen", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "MAD-G: Multilingual adapter generation for efficient cross-lingual transfer", "year": "20...
[ { "formula_coordinates": [ 4, 78.41, 688.88, 201.98, 25.55 ], "formula_id": "formula_0", "formula_text": "RIPL(S AL , S PL ) = AUC(S AL ) -AUC(S PL ) 1 -AUC(S PL )" } ]
Parameter-Efficient Language Model Tuning with Active Learning in Low-Resource Settings
Pre-trained language models (PLMs) have ignited a surge in demand for effective finetuning techniques, particularly in low-resource domains and languages. Active learning (AL), a set of algorithms designed to decrease labeling costs by minimizing label complexity, has shown promise in confronting the labeling bottlenec...
Josip Jukić; Jan Šnajder Takelab
[ { "figure_caption": "Figure 3 :3Figure3: AL learning curves compared with random sampling on the SUBJ dataset. The first and the second rows show learning curves for adapters without and with TAPT, respectively. The third row shows learning curves for FFT, without and with TAPT. The results are averaged over fi...
[{"Category": "Methodological Basis", "Citation": "(Cohn et al., 1996)", "Explanation": "The cited work introduces the concept of active learning as a potential solution to the challenge of data labeling in low-resource settings, which the citing paper builds upon in its research on efficient finetuning methods for PLM...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b12", "b36", "b15", "b34", "b2", "b3", "b8" ], "table_ref": [], "text": "Document comprehension involves interpreting words that can alter the meaning of the text based on their pla...
2024-01-22
10.18653/v1/D19-1345
[ { "authors": "Shilpa Arora; Mahesh Joshi; Carolyn Rosé", "journal": "", "ref_id": "b0", "title": "Identifying types of claims in online customer reviews", "year": "2009" }, { "authors": "Iz Beltagy; Matthew E Peters; Arman Cohan", "journal": "", "ref_id": "b1", "title": "Long...
[]
Connecting the Dots: What Graph-Based Text Representations Work Best for Text Classification using Graph Neural Networks?
Given the success of Graph Neural Networks (GNNs) for structure-aware machine learning, many studies have explored their use for text classification, but mostly in specific domains with limited data characteristics. Moreover, some strategies prior to GNNs relied on graph mining and classical machine learning, making it...
Margarita Bugueño; Gerard De Melo
[ { "figure_caption": "1http://derekgreene.com/bbc/ 2 https://zenodo.org/HNDrecord", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: Graph Construction Methods. Given the input text \"Start working! The soone...
[{"Category": "Methodological Basis", "Citation": "(Castillo et al., 2015)", "Explanation": "The cited work provides a foundation for the use of graphs in text classification tasks, as it discusses the applicability and effectiveness of graphs in broader settings."}, {"Category": "Extension or Continuation", "Citation"...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b5", "b11", "b12", "b13", "b14", "b15", "b16", "b17", ...
2023-07
10.1177/0361198105191800112
[ { "authors": "", "journal": "OECD", "ref_id": "b0", "title": "The Economic Consequences of Outdoor Air Pollution", "year": "2016" }, { "authors": "R N Annavarapu; S Kathi", "journal": "Environmental pollution", "ref_id": "b1", "title": "Cognitive disorders in children associa...
[ { "formula_coordinates": [ 4, 147.2, 542.39, 325.78, 11.3 ], "formula_id": "formula_0", "formula_text": "M = S + N(M, S, N ∈ R N c ×SR )(1)" }, { "formula_coordinates": [ 4, 147.2, 605.5, 325.78, 14.13 ], "formula_id": "form...
Real-Time Idling Vehicles Detection Using Combined Audio-Visual Deep Learning
Combustion vehicle emissions contribute to poor air quality and release greenhouse gases into the atmosphere, and vehicle pollution has been associated with numerous adverse health effects. Roadways with extensive waiting and/or passenger drop-off, such as schools and hospital drop-off zones, can result in a high incid...
Xiwen Li; Tristalee Mangin; Surojit Saha; Rehman Mohammed; Evan Blanchard; Dillon Tang; Henry Poppe; Nathan Searle; Ouk Choi; Kerry Kelly; Ross Whitaker
[ { "figure_caption": "Figure 1 .1Figure 1. Proposed System Design. The yellow arrow collects vehicle motion, engine sound, and pollutant concentrations. The red arrow represents data transmission to the computer. The green arrow denotes sending the predicted idling status to the displays. The blue arrow represen...
[{"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work provides the global data on the impact of poor air quality on human health, which serves as a methodological basis for the citing paper to analyze the effects of vehicle pollution on health and healthcare costs."}, {"Category": "Sup...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b1", "b30", "b53", "b29", "b33" ], "table_ref": [], "text": "The meaning and import of an utterance are often underdetermined by the utterance itself. Human interpretation involves making infer...
2023-10-25
10.18653/v1/D15-1008
[ { "authors": "Elliott Ash; Germain Gauthier; Philine Widmer", "journal": "", "ref_id": "b0", "title": "Relatio: Text semantics capture political and economic narratives", "year": "2022" }, { "authors": "Kent Bach", "journal": "", "ref_id": "b1", "title": "Conversational impli...
[ { "formula_coordinates": [ 2, 306.43, 160.53, 194.83, 27.4 ], "formula_id": "formula_0", "formula_text": "### INPUT: { utterance } OUTPUT: { inference 1 } | { inference 2 } | . . ." }, { "formula_coordinates": [ 7, 331.04, 371.46, 193.97,...
Natural Language Decompositions of Implicit Content Enable Better Text Representations
When people interpret text, they rely on inferences that go beyond the observed language itself. Inspired by this observation, we introduce a method for the analysis of text that takes implicitly communicated content explicitly into account. We use a large language model to produce sets of propositions that are inferen...
Alexander Hoyle; Rupak Sarkar; Pranav Goel; Philip Resnik
[ { "figure_caption": "Federallands and waters must not be opened up to fossil fuel extraction. Public lands are national treasures that should be protected for future generations, not auctioned off to the fossil fuel industry's highest bidders.", "figure_data": "", "figure_id": "fig_0", "figure_label...
[{"Category": "Methodological Basis", "Citation": "(Bach, 1994)", "Explanation": "The cited work by Bach (1994) provides foundational theories and methods for understanding the meaning and import of utterances, which the citing paper builds upon in its research on human interpretation of text data."}, {"Category": "Met...
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b29", "b28", "b71", "b8", "b42", "b65", "b13", "b73", "b59", "b2", "b73", "b22", "b16", "b24", "b56", "b32", "b12", "b6", ...
2024-03-11
10.48550/arXiv.1106.6251
[ { "authors": "Ekin Akyürek; Tolga Bolukbasi; Frederick Liu; Binbin Xiong; Ian Tenney; Jacob Andreas; Kelvin Guu", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Towards tracing knowledge in language models back to the training data", "year": "2022-12" }, { ...
[ { "formula_coordinates": [ 3, 242.49, 536.67, 262.18, 8.99 ], "formula_id": "formula_0", "formula_text": "kGLM(x) := W κ(x, X) + b,(1)" }, { "formula_coordinates": [ 3, 244.64, 617.53, 260.03, 8.99 ], "formula_id": "formula_...
FAITHFUL AND EFFICIENT EXPLANATIONS FOR NEU-RAL NETWORKS VIA NEURAL TANGENT KERNEL SUR-ROGATE MODELS
A recent trend in explainable AI research has focused on surrogate modeling, where neural networks are approximated as simpler ML algorithms such as kernel machines. A second trend has been to utilize kernel functions in various explainby-example or data attribution tasks. In this work, we combine these two trends to a...
Andrew Engel; Zhichao Wang; Natalie S Frank; Ioana Dumitriu; Sutanay Choudhury; Anand Sarwate; Tony Chiang
[ { "figure_caption": "Figure 1 :1Figure 1: Linear Realization of Bert-base Model. Each panel shows a linearization of a Bert-base transfer model, initialized from a different seed. An invertible mapping is fit between the kGLM and NN to transform the kGLM's final activations to the NN's, described in Appendix L....
[{"Category": "Methodological Basis", "Citation": "(Leavitt & Morcos, 2020)", "Explanation": "The cited work by Leavitt and Morcos (2020) highlights the importance of explainability in the field of deep neural networks, which serves as a foundational basis for the citing paper to address the same issue."}, {"Category":...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b2" ], "table_ref": [], "text": "Generative Flow Networks (GFlowNets, GFNs; Bengio et al., 2021a,b) are probabilistic models over discrete sample spaces with a compositional structure. They are also stochastic sequent...
2023-08-29
10.48550/arXiv.2302.05446
[ { "authors": "E Bengio; M Jain; M Korablyov; D Precup; Y Bengio", "journal": "", "ref_id": "b0", "title": "Flow network based generative models for non-iterative diverse candidate generation", "year": "2021" }, { "authors": "Y Bengio; T Deleu; E J Hu; S Lahlou; M Tiwari; E Bengio; T Dele...
[]
torchgfn: A PyTorch GFlowNet library
The growing popularity of generative flow networks (GFlowNets or GFNs) from a range of researchers with diverse backgrounds and areas of expertise necessitates a library which facilitates the testing of new features such as training losses that can be easily compared to standard benchmark implementations, or on a set o...
Salem Lahlou; Joseph D Viviano; Mila Victor Schmidt; Yoshua Bengio
[ { "figure_caption": "Figure 1 :1Figure 1: Hierarchy of the codebase for the v1 release. States and Actions are top-level abstractions used to interface between the stateless Environments and Containers, which are generic objects used by the remainder of the codebase. Containers are utilized by both Samplers and...
[{"Category": "Supporting Evidence", "Citation": "(Lahlou et al., 2023)", "Explanation": "The cited work introduces the concept of GFlowNet and its continuous variant, which the citing paper builds upon in the development of the torchgfn library for fast prototyping of GFlowNet related algorithms in PyTorch."}, {"Categ...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b15", "b17", "b33", "b16", "b45", "b13", "b7", "b9", "b14", "b3", "b12", "b40", "b22", "b1", "b37", "b4" ], "table_ref": [], "t...
2023-11-05
10.18653/v1/2022.naacl-main.135
[ { "authors": "Eneko Agirre; Carmen Banea; Daniel Cer; Mona Diab; Aitor González-Agirre; Rada Mihalcea; German Rigau; Janyce Wiebe", "journal": "", "ref_id": "b0", "title": "Semeval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation", "year": "2016" }, { "a...
[ { "formula_coordinates": [ 3, 70.87, 581.19, 189.71, 10.82 ], "formula_id": "formula_0", "formula_text": "• Enc(s 1 ⊖ s 2 ) ≈ f diff (Enc(s 1 ), Enc(s 2 ))" }, { "formula_coordinates": [ 4, 69.59, 125.99, 216.04, 56.63 ], "f...
Bridging Continuous and Discrete Spaces: Interpretable Sentence Representation Learning via Compositional Operations
Traditional sentence embedding models encode sentences into vector representations to capture useful properties such as the semantic similarity between sentences. However, in addition to similarity, sentence semantics can also be interpreted via compositional operations such as sentence fusion or difference. It is uncl...
James Y Huang; Wenlin Yao; Kaiqiang Song; Hongming Zhang; Muhao Chen; Dong Yu
[ { "figure_caption": " 34.1 51.0 28.1 45.0 Model performance on four textual generation tasks for interpretability evaluation. Unsup. Contr. and Sup. Contr. represents Unsupervised and Supervised Contrastive baselines respectively. We report ROUGE-1/2/L scores.", "figure_data": "SRoBERTa49.0 21.9 39.0 39.1 1...
[{"Category": "Methodological Basis", "Citation": "(Conneau et al., 2018)", "Explanation": "The cited work by Conneau et al. provides a method for probing individual linguistic properties in sentence embeddings, which the citing paper adopts to better understand the information encoded in the sentence representation sp...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3" ], "table_ref": [], "text": "The priority focus of modern face recognition studies has been in-line with that of representation learning studies: amplifying the representative p...
2023-05-24
[ { "authors": "Yaobin Zhang; Weihong Deng; Yaoyao Zhong; Jiani Hu; Xian Li; Dongyue Zhao; Dongchao Wen", "journal": "", "ref_id": "b0", "title": "Adaptive label noise cleaning with meta-supervision for deep face recognition", "year": "2021" }, { "authors": "Boxiao Liu; Guanglu Song; Manyu...
[ { "formula_coordinates": [ 2, 335.22, 550.21, 209.89, 30.32 ], "formula_id": "formula_0", "formula_text": "L cls = 1 N N i=1 -log e cos θy i e cos θy i + C j̸ =yi e cos θj ,(1)" }, { "formula_coordinates": [ 3, 112.11, 689.93, 174.25, ...
FaceFusion: Exploiting Full Spectrum of Multiple Datasets
The size of training dataset is known to be among the most dominating aspects of training high-performance face recognition embedding model. Building a large dataset from scratch could be cumbersome and time-intensive, while combining multiple already-built datasets poses the risk of introducing large amount of label n...
Chiyoung Song; Dongjae Lee; Naver Cloud
[ { "figure_caption": "Figure 1 :1Figure 1: Overview of FaceFusion. L cls,k and L cls shares the same class proxies. While L cls,k limits the softmax calculations to each dataset, L cls merges the class proxies of same identity, and removes the barriers between the datasets. GRL reverses the direction of gradient...
[{"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work on rectifier models provides a method for adjusting identities in a way that is relevant to the citing paper."}, {"Category": "Methodological Basis", "Citation": "[2]", "Explanation": "The cited work on posterior data cleaning provi...
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b39", "b1", "b8", "b33", "b31", "b38", "b2", "b17", "b34", "b34", "b30" ], "table_ref": [], "text": "Tracking entity states in procedural ...
10.18653/v1/2020.findings-emnlp.91
[ { "authors": "Antoine Bosselut; Omer Levy; Ari Holtzman; Corin Ennis; Dieter Fox; Yejin Choi", "journal": "", "ref_id": "b0", "title": "Simulating action dynamics with neural process networks", "year": "2017" }, { "authors": "Antoine Bosselut; Omer Levy; Ari Holtzman; Corin Ennis; Dieter...
[]
OPENPI2.0: An Improved Dataset for Entity Tracking in Texts
Much text describes a changing world (e.g., procedures, stories, newswires), and understanding them requires tracking how entities change. An earlier dataset, OPENPI, provided crowdsourced annotations of entity state changes in text. However, a major limitation was that those annotations were free-form and did not iden...
Li Zhang; Hainiu Xu; Abhinav Kommula; Chris Callison-Burch; Niket Tandon
[ { "figure_caption": "Figure 1 :1Figure 1: For each step in a procedure, OPENPI annotates the state change of attributes of entities. Our OPENPI2.0 additionally (shown in red boxes and texts) canonicalizes the entities and attributes and includes their salience scores.", "figure_data": "", "figure_id": "...
[{"Category": "Methodological Basis", "Citation": "(Weston et al., 2015)", "Explanation": "The cited work by Weston et al. provides a foundational method for tracking entity states in procedural texts, which the citing paper builds upon in its own research."}, {"Category": "Methodological Basis", "Citation": "(Bosselut...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b16", "b10", "b3" ], "table_ref": [], "text": "Pretrained large language models (LLMs) have recently seen widespread adoption by users worldwide due to their capabilities with generative tasks ranging from...
2024-02-13
10.18653/v1/2020.acl-main.485
[ { "authors": "Abubakar Abid; Maheen Farooqi; James Zou", "journal": "Nature Machine Intelligence", "ref_id": "b0", "title": "Large language models associate muslims with violence", "year": "2021" }, { "authors": "Kabir Ahuja; Harshita Diddee; Rishav Hada; Millicent Ochieng; Krithika Rame...
[ { "formula_coordinates": [ 5, 330.4, 78.67, 169.74, 113.25 ], "formula_id": "formula_0", "formula_text": "CS(ci, cj) = 1 if ci = cj, 0 otherwise Con CS(t) = CS(cKB, ci) Non CS(t) = 1 n c∈C non CS(cKB, c) ∆ CS(t) = Con CS -Non CS Non CS Cst CS(t) = 1 n(n -1) n i=1 n j=1,...
This Land is {Your, My} Land: Evaluating Geopolitical Bias in Language Models through Territorial Disputes
Do the Spratly Islands belong to China, the Philippines, or Vietnam? A pretrained large language model (LLM) may answer differently if asked in the languages of each claimant country: Chinese, Tagalog, or Vietnamese. This contrasts with a multilingual human, who would likely answer consistently. In this paper, we show ...
Bryan Li; Samar Haider; Chris Callison-Burch
[ { "figure_caption": "Figure 2 :2Figure 2: Illustration of comparisons made for the CS metrics. KB CS, Control CS, and Non-control CS all compare between the KB country and a response, while Consistency CS compares between responses.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", ...
[{"Category": "Supporting Evidence", "Citation": "(Petroni et al., 2019)", "Explanation": "The cited work by Petroni et al. (2019) provides evidence that LLMs do internalize some relational knowledge, which is a foundational element for the discussion in the citing paper about the limitations of LLMs in terms of their ...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b2", "b4", "b20", "b0" ], "table_ref": [], "text": "It is well known that any domain gap between training and test data hurts the performance of machine learning models in general, and object detecto...
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "MMDetection3D: Open-MMLab next-generation platform for general 3D object detection", "year": "2020" }, { "authors": "Jeevan Devaranjan; Amlan Kar; Sanja Fidler", "journal": "", "ref_id": "b1", "title": "Meta-Sim2: Unsu...
[]
Realistically distributing object placements in synthetic training data improves the performance of vision-based object detection models
When training object detection models on synthetic data, it is important to make the distribution of synthetic data as close as possible to the distribution of real data. We investigate specifically the impact of object placement distribution, keeping all other aspects of synthetic data fixed. Our experiment, training ...
Setareh Dabiri; Vasileios Lioutas; Berend Zwartsenberg; Yunpeng Liu; Matthew Niedoba; Xiaoxuan Liang; Dylan Green; Jonathan Wilder Lavington; Frank Wood; Adam Ścibior; Inverted Ai
[ { "figure_caption": "Figure 1 .1Figure 1. Sample training set images generated using CARLA. We compare the baseline placement (left) with a realistic one (right).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Fi...
[{"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The cited work, CARLA driving simulator, is used as a source of training data in the citing paper to generate synthetic data for the study of the impact of object placement distribution on the performance of vision models in driving contexts."}, {...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b4", "b12", "b26", "b33", "b30", "b28", "b19" ], "table_ref": [], "text": "Fact-checking is an essential task in natural language processing, focusing on evaluating the accuracy of ...
2024-04-01
10.18653/v1/2023.emnlp-demo.10
[ { "authors": "Mubashara Akhtar; Oana Cocarascu; Elena Simperl", "journal": "", "ref_id": "b0", "title": "Pubhealthtab: A public health table-based dataset for evidence-based fact checking", "year": "2022" }, { "authors": "Firoj Alam; Stefano Cresci; Tanmoy Chakraborty; Fabrizio Silvestri...
[]
Self-Checker: Plug-and-Play Modules for Fact-Checking with Large Language Models
Fact-checking is an essential task in NLP that is commonly utilized to validate the factual accuracy of a piece of text. Previous approaches mainly involve the resource-intensive process of fine-tuning pre-trained language models on specific datasets. In addition, there is a notable gap in datasets that focus on factch...
Miaoran Li; Baolin Peng; Michel Galley; Jianfeng Gao; Zhu Zhang
[ { "figure_caption": "Figure 2 :2Figure 2: Overview of SELF-CHECKER. The framework consists of four plug-and-play modules: (1) claim processor, (2) query generator, (3) evidence seeker, and (4) verdict counselor.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figu...
[{"Category": "Methodological Basis", "Citation": "(Brown et al., 2020)", "Explanation": "The cited work, GPT-3, is a large language model that the citing paper uses as a basis for their research on fact-checking and the generation of false information in LLMs."}, {"Category": "Extension or Continuation", "Citation": "...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b30", "b2", "b31", "b19", "b48", "b34", "b18" ], "table_ref": [], "text": "Retrieval-augmented language models, which integrate non-parametric dense retrieval with autoregressive ne...
2023-05-24
10.18653/v1/P17-1171
[ { "authors": "Uri Alon; Frank Xu; Junxian He; Sudipta Sengupta; Dan Roth; Graham Neubig", "journal": "", "ref_id": "b0", "title": "Neuro-symbolic language modeling with automaton-augmented retrieval", "year": "2022" }, { "authors": " Pmlr", "journal": "", "ref_id": "b1", "tit...
[ { "formula_coordinates": [ 3, 77.93, 96.49, 204.14, 26.03 ], "formula_id": "formula_0", "formula_text": "P KN N (w t |q t ) ∝ (k i ,v i ) 1 wt=v i exp(-d(k i , q t ))" }, { "formula_coordinates": [ 3, 70.87, 198.97, 218.27, 14.63 ...
kNN-LM Does Not Improve Open-ended Text Generation
In this paper, we study the generation quality of interpolation-based retrieval-augmented language models (LMs). These methods, best exemplified by the kNN-LM (Khandelwal et al., 2020), interpolate the LM's predicted distribution of the next word with a distribution formed from the most relevant retrievals for a given ...
Shufan Wang; Yixiao Song; Andrew Drozdov; Aparna Garimella; Varun Manjunatha; Mohit Iyyer
[ { "figure_caption": "Figure 1 :1Figure1: The plot presents how many times each type of generations (kNN-LM or GPT-2) is chosen by the evaluators. The dark area in each bar shows that the choices were made confidently. The light area represents the choices between kNN-LM and GPT-2 that were hard but the evaluato...
[{"Category": "Methodological Basis", "Citation": "(Metzler et al., 2022)", "Explanation": "The cited work by Metzler et al. provides a strong empirical validation of retrieval-augmented language models, which the citing paper builds upon to study interpolation-based LMs."}, {"Category": "Methodological Basis", "Citati...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b7", "b5", "b5", "b6", "b6", "b3", "b4", "b2", "b9" ], "table_ref": [], "text": "The success of large language models (LLMs) at generating human-like text has spurred a ...
10.1609/aaai.v34i05.6239
[ { "authors": "", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Proceedings of the Fifth Black-boxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP", "year": "2022" }, { "authors": "Yonatan Bisk; Rowan Zellers; Ronan Le Bras; Jianfeng G...
[]
Testing Causal Models of Word Meaning in GPT-3 and -4
Large Language Models (LLMs) have driven extraordinary improvements in NLP. However, it is unclear how such models represent lexical concepts-i.e., the meanings of the words they use. This paper evaluates the lexical representations of GPT-3 and GPT-4 through the lens of HIPE theory, a theory of concept representations...
Sam Musker; Ellie Pavlick
[ { "figure_caption": "One day Jane wanted to wipe up a water spill on the kitchen floor[...]. The object consisted of a bundle of plastic bags attached to a 4-foot long stick. [...] pressed it against the water spill.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "...
[{"Category": "Methodological Basis", "Citation": "(Forbes et al., 2019)", "Explanation": "The cited work by Forbes et al. provides a method for measuring the extent to which LLMs have good representations of word meanings by focusing on physical objects and their properties and affordances."}, {"Category": "Methodolog...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b18",...
2023-05-28
10.1007/978-3-319-21903-5_8
[ { "authors": "S E Schaeffer", "journal": "Computer science review", "ref_id": "b0", "title": "Graph clustering", "year": "2007" }, { "authors": "Y Zhou; H Cheng; J X Yu", "journal": "", "ref_id": "b1", "title": "Graph clustering based on structural/attribute similarities", ...
[ { "formula_coordinates": [ 4, 123, 65.75, 265.34, 24.74 ], "formula_id": "formula_0", "formula_text": "Hψ (x) = - 2 2m ∇ 2 + v(x) ψ (x) = Eψ (x)(1)" }, { "formula_coordinates": [ 4, 165.72, 186.62, 222.63, 24.44 ], "formula_...
Graph Analysis Using a GPU-based Parallel Algorithm: Quantum Clustering
The article introduces a new method for applying Quantum Clustering to graph structures. Quantum Clustering (QC) is a novel densitybased unsupervised learning method that determines cluster centers by constructing a potential function. In this method, we use the Graph Gradient Descent algorithm to find the centers of c...
Zhe Wang; Zhijie He; Ding Liu
[ { "figure_caption": "Fig. 1 A1Fig. 1 A classification of Graph Clustering;", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Algorithm 11Calculating the Potential function for each data point Require: graph : graph structure d...
[{"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work introduces the concept of graph clustering and its application in various fields, providing a methodological basis for the citing paper to discuss the techniques and applications of graph clustering."}, {"Category": "Supporting Evid...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [], "table_ref": [], "text": "As the volume of biomedical literature continues to grow, biomedical entity linking and event extraction tasks have received increasingly more at-1 Equal contribution. Binding to laminin-1" }, { ...
2023-05-24
10.18653/v1/2021.naacl-main.205
[ { "authors": "Rico Angell; Sunil Monath; Nishant Mohan; Andrew Yadav; Mccallum", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Clusteringbased inference for biomedical entity linking", "year": "2021" }, { "authors": "Amos Bairoch; Rolf Apweiler; Cathy...
[ { "formula_coordinates": [ 3, 305.82, 436.44, 98.02, 13.87 ], "formula_id": "formula_0", "formula_text": "x E = [x E 1 , x E 2 , ..., x E n ]" }, { "formula_coordinates": [ 4, 426.06, 117.54, 98.35, 13.87 ], "formula_id": "f...
Iteratively Improving Biomedical Entity Linking and Event Extraction via Hard Expectation-Maximization
Biomedical entity linking and event extraction are two crucial tasks to support text understanding and retrieval in the biomedical domain. These two tasks intrinsically benefit each other: entity linking disambiguates the biomedical concepts by referring to external knowledge bases and the domain knowledge further prov...
Xiaochu Li; Minqian Liu; Zhiyang Xu; Lifu Huang
[ { "figure_caption": "TheCSC-1, which associates with ICP-1, directly binds to the BIR-1.GO:0001817Process that modulates the frequencey, rate, or extent of production of", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "induces...
[{"Category": "Methodological Basis", "Citation": "(Kim et al., 2009)", "Explanation": "The cited work by Kim et al. provides a method for retrieving and organizing information related to gene functions, bio-molecule relations, and bio-molecule behaviors from unstructured texts, which the citing paper adopts in its res...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b28", "b6", "b40" ], "table_ref": [], "text": "Scientific Opinion Summarization provides a succinct synopsis for scientific documents and helps readers recap salient information and understand the professi...
2023-11-13
10.18653/v1/2021.emnlp-main.528
[ { "authors": "Reinald Kim Amplayo; Stefanos Angelidis; Mirella Lapata", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Aspect-controllable opinion summarization", "year": "2021" }, { "authors": "Reinald Kim Amplayo; Stefanos Angelidis; Mirella Lapata", ...
[]
Scientific Opinion Summarization: Meta-review Generation with Checklist-guided Iterative Introspection
Opinions in the scientific domain can be divergent, leading to controversy or consensus among reviewers. However, current opinion summarization datasets mostly focus on product review domains, which do not account for this variability under the assumption that the input opinions are non-controversial. To address this g...
Qi Zeng; Mankeerat Sidhu; Hou Pong Chan; Lu Wang; Heng Ji
[ { "figure_caption": "Figure 5 :5Figure 5: We show the meta-reviews from human, vanilla, CGI 2 , and CGI 2 without iterative runs for the same paper. The yellow background indicates hallucinated content. The green background indicates redundant content.", "figure_data": "", "figure_id": "fig_0", "fig...
[{"Category": "Methodological Basis", "Citation": "(Hu and Liu, 2006)", "Explanation": "The cited work by Hu and Liu (2006) provides a methodological approach for identifying representative and consensus opinions in product reviews, which the citing paper may adapt or build upon in the context of scientific opinion sum...
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b21", "b22", "b12", "b16", "b4", "b35", "b22" ], "table_ref": [], "text": "With the rapid development of social platforms and digital devices, more and more video...
[ { "authors": "", "journal": "Palaskar et al", "ref_id": "b0", "title": "How2 R-1 R-2 R-L B-1 B-2 B-3 B-4 METEOR CIDEr HA", "year": "0128" }, { "authors": "Gilles Degottex; John Kane; Thomas Drugman; Tuomo Raitio; Stefan Scherer", "journal": "IEEE", "ref_id": "b1", "title": "C...
[ { "formula_coordinates": [ 3, 351.12, 643.75, 174.02, 30.99 ], "formula_id": "formula_0", "formula_text": "C ŷ = argmax y j ∈C P Θ (y j | Z),(1)" }, { "formula_coordinates": [ 3, 388.76, 699.38, 136.39, 10.69 ], "formula_id"...
Denoising Bottleneck with Mutual Information Maximization for Video Multimodal Fusion
Video multimodal fusion aims to integrate multimodal signals in videos, such as visual, audio and text, to make a complementary prediction with multiple modalities contents. However, unlike other image-text multimodal tasks, video has longer multimodal sequences with more redundancy and noise in both visual and audio m...
Shaoxiang Wu; Damai Dai; Ziwei Qin; Tianyu Liu; Binghuai Lin; Yunbo Cao; Zhifang Sui
[ { "figure_caption": "Figure 1 :1Figure 1: An example of redundancy and noise in a video. As illustrated, consecutive frames have high cosine similarity, which results in a problem of redundancy. In addition, useless information like distracting background and weak alignment between frames and transcripts compos...
[{"Category": "Methodological Basis", "Citation": "(Liu et al., 2020)", "Explanation": "The cited work by Liu et al. introduces a fusion forget gate to control the flow of information between multimodal sequences, which the citing paper adopts as a method to address the problem of redundancy and noise in video multimod...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b4", "b49", "b6", "b27", "b19", "b51", "b19", "b51" ], "table_ref": [], "text": "Survival analysis is a typical statistical task for tracking occurrence of the event of intere...
2023-05-24
[ { "authors": " Antolini", "journal": "", "ref_id": "b0", "title": "", "year": "2005" }, { "authors": "Laura Antolini; Patrizia Boracchi; Elia Biganzoli", "journal": "Stats in Medicine", "ref_id": "b1", "title": "A time-dependent discrimination index for survival data", "y...
[ { "formula_coordinates": [ 2, 390.32, 101.84, 167.68, 9.65 ], "formula_id": "formula_0", "formula_text": "p(t|x) = P r(t x = t|x) (1)" }, { "formula_coordinates": [ 2, 386.17, 160.46, 171.83, 38.61 ], "formula_id": "formula_...
Learning Survival Distribution with Implicit Survival Function
Survival analysis aims at modeling the relationship between covariates and event occurrence with some untracked (censored) samples. In implementation, existing methods model the survival distribution with strong assumptions or in a discrete time space for likelihood estimation with censorship, which leads to weak gener...
Yu Ling; Weimin Tan; Bo Yan
[ { "figure_caption": "Figure 1 :1Figure 1: Brief framework of ISF. (a) ISF takes sample x and time t as input, and predicts conditional hazard rate ĥ(t|x). (b) Based on estimated conditional hazard rates, we can derive survival distribution p(t|x) through numerical integration.", "figure_data": "", "figu...
[{"Category": "Supporting Evidence", "Citation": "[Courtiol et al., 2019]", "Explanation": "The cited work provides a medical example of using survival analysis to model the death probability of diseases, which supports the claim in the citing paper about the use of survival analysis in medical situations."}, {"Categor...
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b16", "b26", "b15", "b1", "b16", "b30", "b17", "b14", "b8", "b12", "b28", "b29", "b24", "b31", "b9", "b17", "b9", ...
2023-11-14
[ { "authors": "Huda Alamri; Vincent Cartillier; Abhishek Das; Jue Wang; Anoop Cherian; Irfan Essa; Dhruv Batra; Tim K Marks; Chiori Hori; Peter Anderson", "journal": "", "ref_id": "b0", "title": "Audio visual scene-aware dialog", "year": "2019" }, { "authors": "Satanjeev Banerjee; Alon La...
[ { "formula_coordinates": [ 5, 104.63, 71.86, 390.54, 129.94 ], "formula_id": "formula_0", "formula_text": "B L E U -1 B L E U -2 B L E U -3 B L E U -4 R O U G E -1 R O U G E -2 R O U G E -L M E T E O R C H R F + + B E R T S C O R E 0.0 0.2 0.4 0.6 0.8 1.0 score Metrics ...
Evaluate What You Can't Evaluate: Unassessable Quality for Generated Response
LLMs (large language models) like ChatGPT have demonstrated exceptional language comprehension and generation abilities. While reference-free evaluators grounded in LLMs exhibit superior human alignment compared to traditional reference-based evaluators, the utilization of such evaluators poses several challenges. Refe...
Yongkang Liu; Shi Feng; Daling Wang; Yifei Zhang; Hinrich Schütze
[ { "figure_caption": "Figure 1 :1Figure 1: Examples of ChatGPT assigning high probabilities for unreasonable responses. The correct response semantic for this example is unique. The reference response is I checked and her birthday should be June 6, 1975.", "figure_data": "", "figure_id": "fig_0", "fi...
[{"Category": "Methodological Basis", "Citation": "(Papineni et al., 2002)", "Explanation": "The cited work by Papineni et al. (2002) introduces the BLEU metric, which is a reference-based evaluation metric for assessing the quality of response generation in the context of the citing paper."}, {"Category": "Methodologi...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b10", "b14", "b2", "b17", "b1", "b6", "b18", "b8" ], "table_ref": [], "text": "The goal of information extraction (IE) is to learn structure from unstructured documents. Exist...
2023-11-17
10.3115/v1/P15-1034
[ { "authors": "Gabor Angeli; Melvin Jose ; Johnson Premkumar; Christopher D Manning", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Leveraging linguistic structure for open domain information extraction", "year": "2015" }, { "authors": "David Bamman; No...
[]
InteractiveIE: Towards Assessing the Strength of Human-AI Collaboration in Improving the Performance of Information Extraction
Learning template-based information extraction (IE) from documents is a crucial yet difficult task. Prior template-based IE approaches assume foreknowledge of the domain's templates. However, many real-world IE scenarios do not have pre-defined schemas. To "figureout-as you go" requires a solution with zero or minimal ...
Ishani Mondal; Michelle Yuan; Aparna Garimella; Francis Ferraro; Andrew Blair-Stanek; Benjamin Van Durme; Jordan Boyd-Graber
[ { "figure_caption": "Figure 1 :1Figure 1: shows Human-AI interactions in InteractiveIE which consists of three main components: Preprocessing View, Explorer View and Document-Level Cluster view. Through this interface, the humans would be able to modify the bird's eye view of a corpus by altering the questions ...
[{"Category": "Methodological Basis", "Citation": "(Li et al., 2022)", "Explanation": "The cited work by Li et al. provides a method for understanding patterns and behaviors in the world, which the citing paper adopts to help analysts in their work."}, {"Category": "Methodological Basis", "Citation": "(M\u00f3ra et al....
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b12", "b10", "b7", "b22", "b15", "b28", "b3" ], "table_ref": [], "text": "As the volume of scientific publishing increases, it is becoming crucial to develop more sophisticated anal...
2023-05-24
10.18653/v1/D19-1371
[ { "authors": "", "journal": "Iz", "ref_id": "b0", "title": "", "year": "" }, { "authors": "Kyle Beltagy; Arman Lo; Cohan", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "SciB-ERT: A pretrained language model for scientific text", "year"...
[ { "formula_coordinates": [ 12, 104.07, 330.16, 118.83, 25.58 ], "formula_id": "formula_0", "formula_text": "-(x 1 , x 2 , . . . x n ); -q ϕ (z|x) = N (z; µ, σ 2 I)." } ]
Complex Mathematical Symbol Definition Structures: A Dataset and Model for Coordination Resolution in Definition Extraction
Mathematical symbol definition extraction is important for improving scholarly reading interfaces and scholarly information extraction (IE). However, the task poses several challenges: math symbols are difficult to process as they are not composed of natural language morphemes; and scholarly papers often contain senten...
Anna Martin-Boyle; Andrew Head; Kyle Lo; Risham Sidhu; Marti A Hearst; Dongyeop Kang
[ { "figure_caption": "Figure 5 :5Figure 5: (a) The macro F1 score based on the number of symbols in the sample, and (b) the difference in scores calculated by subtracting baseline F1 scores from TaDDEx.", "figure_data": "", "figure_id": "fig_1", "figure_label": "5", "figure_type": "figure" }, ...
[{"Category": "Methodological Basis", "Citation": "(Head et al., 2021)", "Explanation": "The cited work provides a reading interface (ScholarPhi) that could be used for mathematical symbol definition extraction, which the citing paper adopts as a potential use case for the task of symbol definition extraction."}, {"Cat...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b28", "b14", "b37", "b40", "b24", "b13", "b39", "b49" ], "table_ref": [], "text": "Annotator disagreement is a common challenge in NLP (Leonardelli et al., 2021;Fornaciari et ...
10.18653/v1/2020.alw-1.21
[ { "authors": "Sohail Akhtar; Valerio Basile; Viviana Patti", "journal": "", "ref_id": "b0", "title": "Whose opinions matter? perspective-aware models to identify opinions of hate speech victims in abusive language detection", "year": "2021" }, { "authors": "Al Hala; Maximilian Kuwatly; G...
[ { "formula_coordinates": [ 3, 99.95, 743.25, 160.11, 33.71 ], "formula_id": "formula_0", "formula_text": "θ * = arg max θ E i=1 log P (y i |x i , a i ; θ)" }, { "formula_coordinates": [ 3, 350.38, 591.52, 129.3, 31 ], "formu...
You Are What You Annotate: Towards Better Models through Annotator Representations
Annotator disagreement is ubiquitous in natural language processing (NLP) tasks. There are multiple reasons for such disagreements, including the subjectivity of the task, difficult cases, unclear guidelines, and so on. Rather than simply aggregating labels to obtain data annotations, we instead try to directly model t...
Naihao Deng; Xinliang Frederick Zhang; Siyang Liu; Winston Wu; Lu Wang; Rada Mihalcea
[ { "figure_caption": "Ea : Text embedding and weighted annotation embedding.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": ", etc. For instance, during the collection process of the CommitmentBank dataset, De Marneffe et al. ...
[{"Category": "Supporting Evidence", "Citation": "(Leonardelli et al., 2021)", "Explanation": "The cited work by Leonardelli et al. provides a discussion on the common challenge of annotator disagreement in NLP, which serves as a foundational point for the citing paper to build upon in its own research."}, {"Category":...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b6", "b20", "b31", "b29" ], "table_ref": [], "text": "3D pose estimation and object classification are challenging but important tasks in computer vision. In realworld applications such as autonomous...
2023-06-05
[ { "authors": "Yutong Bai; Angtian Wang; Adam Kortylewski; Alan Yuille", "journal": "", "ref_id": "b0", "title": "Coke: Localized contrastive learning for robust keypoint detection", "year": "2020" }, { "authors": "Eran Borenstein; Shimon Ullman", "journal": "IEEE Trans. Pattern Anal....
[ { "formula_coordinates": [ 3, 117.11, 455.59, 72.16, 11.23 ], "formula_id": "formula_0", "formula_text": "C = {C r ∈ R c } R" }, { "formula_coordinates": [ 3, 57.56, 600.98, 228.8, 20.14 ], "formula_id": "formula_1", "fo...
Robust 3D-aware Object Classification via Discriminative Render-and-Compare
In real-world applications, it is essential to jointly estimate the 3D object pose and class label of objects, i.e., to perform 3D-aware classification. While current approaches for either image classification or pose estimation can be extended to 3D-aware classification, we observe that they are inherently limited: 1)...
Artur Jesslen; Guofeng Zhang; Angtian Wang; Alan Yuille; Adam Kortylewski
[ { "figure_caption": "Figure 1 .1Figure 1. 3D-aware classification of a partially occluded car (a). Neither a feed-forward neural network (b) nor a render-andcompare approach (c) produce satisfying results as they only predict one task correctly but fail at the other. Our RCNet model (d)handles both tasks robust...
[{"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The cited work introduces a render-and-compare approach for 3D pose estimation, which the citing paper adopts in their study of 3D-aware object classification."}, {"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The cited ...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b11" ], "table_ref": [], "text": "Video super-resolution (VSR) is the process of changing from low-resolution (LR) video to high-resolution (HR) video. Currently, VSR is divided into traditional VSR and real-world VSR...
2024-01-01
[ { "authors": "Y Blau; R Mechrez; R Timofte; T Michaeli; L Zelnik-Manor", "journal": "", "ref_id": "b0", "title": "The 2018 PIRM challenge on perceptual image super-resolution", "year": "2018" }, { "authors": "K C Chan; X Wang; K Yu; C Dong; C C Loy", "journal": "", "ref_id": "b1"...
[ { "formula_coordinates": [ 3, 381.94, 432.46, 176.06, 9.65 ], "formula_id": "formula_0", "formula_text": "x = M • x i + (1 -M ) • x j ,(1)" }, { "formula_coordinates": [ 3, 382.97, 449.86, 175.03, 9.65 ], "formula_id": "form...
NegVSR: Augmenting Negatives for Generalized Noise Modeling in Real-world Video Super-Resolution
The capability of video super-resolution (VSR) to synthesize high-resolution (HR) video from ideal datasets has been demonstrated in many works. However, applying the VSR model to real-world video with unknown and complex degradation remains a challenging task. First, existing degradation metrics in most VSR methods ar...
Yexing Song; Meilin Wang; Zhijing Yang; Xiaoyu Xian; Yukai Shi
[ { "figure_caption": "Figure 1 :1Figure 1: The overview of the proposed NegVSR. (a) Our approach initially extracts noise sequence N sq through window sequence C in an unsupervised manner. The motion of C occurs within the OOD video noise dataset V od . Subsequently, it mixes N sq and LR video V lr to create nov...
[{"Category": "Methodological Basis", "Citation": "(Chan et al. 2022b)", "Explanation": "The cited work by Chan et al. provides a classification of VSR into traditional and real-world VSR, which the citing paper adopts in its research to better understand the VSR process and the challenges associated with it."}, {"Cate...
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b13", "b35" ], "table_ref": [], "text": "Diffusion models (Sohl-Dickstein et al., 2015b;Ho et al., 2020;Song et al., 2020) have shown remarkable performance in image generation and attracted h...
2023-06-14
[ { "authors": "Jacob Austin; Daniel D Johnson; Jonathan Ho; Daniel Tarlow; Rianne Van Den; Jacob Berg; Daniel D Austin; Jonathan Johnson; Daniel Ho; Rianne Tarlow; Van Den; Berg", "journal": "", "ref_id": "b0", "title": "Structured denoising diffusion models in discrete state-spaces", "year": "...
[ { "formula_coordinates": [ 2, 325.57, 166.65, 194.46, 19.34 ], "formula_id": "formula_0", "formula_text": "q(x t |x t-1 ) = N (x t ; 1 -β t • x t-1 ; β t I) (1)" }, { "formula_coordinates": [ 2, 351.69, 230.82, 168.34, 26.03 ], ...
A Survey of Diffusion Models in Natural Language Processing
This survey paper provides a comprehensive review of the use of diffusion models in natural language processing (NLP). Diffusion models are a class of mathematical models that aim to capture the diffusion of information or signals across a network or manifold. In NLP, diffusion models have been used in a variety of app...
Hao Zou; Zae Myung Kim; Dongyeop Kang
[ { "figure_caption": "Figure 1 :1Figure 1: The yearly number of both published and preprinted papers on diffusion models for NLP. For year 2023, the blue bar shows the number collected until the end of April 2023, and the dashed gray bar shows the estimated number for the whole year.", "figure_data": "", ...
[{"Category": "Methodological Basis", "Citation": "(Sohl-Dickstein et al., 2015b)", "Explanation": "The cited work by Sohl-Dickstein et al. provides a foundational methodology for diffusion models in the field of image generation, which the citing paper builds upon in the context of natural language processing."}, {"Ca...
[ { "figure_ref": [ "fig_5", "fig_5", "fig_5" ], "heading": "Introduction", "publication_ref": [ "b44", "b43", "b26", "b28", "b27", "b49", "b52", "b15", "b62", "b62", "b11", "b28", "b23", "b0", ...
2023-05-24
[ { "authors": "Max Bain; Arsha Nagrani; Gül Varol; Andrew Zisserman", "journal": "", "ref_id": "b0", "title": "Frozen in time: A joint video and image encoder for end-to-end retrieval", "year": "2021" }, { "authors": "Yogesh Balaji; Martin Renqiang Min; Bing Bai; Rama Chellappa; Hans Pete...
[ { "formula_coordinates": [ 5, 114.4, 384.42, 390.26, 45.04 ], "formula_id": "formula_0", "formula_text": "Q = {(m i , y (i,t) )|i = 1, 2, . . . , H•W } pairs from the n-th view n = 1, 2, . . . , N = {(m (i,n) , y (i,n,t) = √ ᾱy (i,n,0) + √ 1 -ᾱt ϵ i )|i=1, 2, . . . , H•...
T1: Scaling Diffusion Probabilistic Fields to High-Resolution on Unified Visual Modalities
Diffusion Probabilistic Field (DPF) [63] models the distribution of continuous functions defined over metric spaces. While DPF shows great potential for unifying data generation of various modalities including images, videos, and 3D geometry, it does not scale to a higher data resolution. This can be attributed to the ...
Kangfu Mei; Mo Zhou; Vishal M Patel
[ { "figure_caption": "Figure 1 :1Figure 1: Illustration of the field models' capability of modeling visual content distributions. The underlying data distribution is simplified into the 1-D space for demonstration. The score network learns the distribution through the attention among coordinate-signal pairs, whi...
[{"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The cited work on diffusion models provides the forward and backward processes that the citing paper adopts in their image generation model."}, {"Category": "Extension or Continuation", "Citation": "[29,28]", "Explanation": "The cited work on mul...
[ { "figure_ref": [ "fig_0" ], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b7", "b9", "b10", "b11", "b12", "b13", "b1", "b3", "b7", "b8", "b10", ...
2023-07-25
10.1145/3511808.3557289
[ { "authors": "H Zhang; E Yuan; W Guo; Z He; J Qin; H Guo; B Chen; X Li; R Tang", "journal": "ACM", "ref_id": "b0", "title": "Disentangling past-future modeling in sequential recommendation via dual networks", "year": "2022" }, { "authors": "B Hidasi; A Karatzoglou; L Baltrunas; D Tikk", ...
[ { "formula_coordinates": [ 3, 311.98, 173.06, 251.06, 22.49 ], "formula_id": "formula_0", "formula_text": "X u = x 1 → x 2 → • • • → x |X u | , chronologically records |X u |" }, { "formula_coordinates": [ 3, 396.76, 309.12, 166.28, ...
TriMLP: Revenge of a MLP-like Architecture in Sequential Recommendation
Sequential recommenders concentrate on modeling the transmission patterns shrouded in sequences of historical user-item interactive behaviors (or referred as token) and inferring dynamic preferences over candidate items. Fueled by diverse advanced neural network architectures like RNN, CNN and Transformer, existing met...
Yiheng Jiang; Yuanbo Xu; Yongjian Yang; Funing Yang; Pengyang Wang; Hui Xiong
[ { "figure_caption": "Fig. 1 .1Fig.1. Accuracy/Efficiency traded-off on QB-Video. Along the vertical axis, the higher, the better recommendation performance; along the horizontal axis, the more left, the less inference cost.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_...
[{"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work introduces the concept of sequential recommendation and provides a framework for understanding the process of mining dependencies among tokens and inferring preferences over time, which the citing paper builds upon in its research o...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b29", "b2", "b32", "b24", "b10", "b0" ], "table_ref": [], "text": "Generalization to unseen tasks has been explored and investigated on zero-/few-shot NLP tasks by performing multi-task l...
2023-05-24
10.1109/CVPR.2017.670
[ { "authors": "Jean-Baptiste Alayrac; Jeff Donahue; Pauline Luc; Antoine Miech; Iain Barr; Yana Hasson; Karel Lenc; Arthur Mensch; Katie Millican; Malcolm Reynolds", "journal": "", "ref_id": "b0", "title": "Flamingo: a visual language model for few-shot learning", "year": "2022" }, { "aut...
[ { "formula_coordinates": [ 5, 70.87, 240.81, 101.04, 10.63 ], "formula_id": "formula_0", "formula_text": "{(V 1 , l 1 ), ..., (V m , l m )}" }, { "formula_coordinates": [ 6, 77.48, 74.77, 440.32, 18.97 ], "formula_id": "form...
GRILL: Grounded Vision-language Pre-training via Aligning Text and Image Regions
Generalization to unseen tasks is an important ability for few-shot learners to achieve better zero-/few-shot performance on diverse tasks. However, such generalization to visionlanguage tasks including grounding and generation tasks has been under-explored; existing few-shot VL models struggle to handle tasks that inv...
Woojeong Jin; Subhabrata Mukherjee; Yu Cheng; Yelong Shen; Weizhu Chen; Ahmed Hassan Awadallah; Damien Jose; Xiang Ren
[ { "figure_caption": "Figure 2 :2Figure2: Illustration of GRILL. Our model is a sequence-to-sequence transformer that uses a vision transformer (ViT)(Dosovitskiy et al., 2021;Liu et al., 2021) to process images with patch embeddings, where each patch represents a fixed-size region of the image. We replace the re...
[{"Category": "Methodological Basis", "Citation": "(Sanh et al., 2021)", "Explanation": "The cited work by Sanh et al. provides a method of performing multi-task learning with task-specific prompts, which the citing paper adopts to explore generalization to unseen tasks in zero-/few-shot NLP tasks."}, {"Category": "Ext...
[ { "figure_ref": [ "fig_1" ], "heading": "INTRODUCTION", "publication_ref": [ "b8", "b19", "b34", "b4", "b22", "b27", "b5", "b28", "b15", "b3", "b39", "b30", "b38", "b33", "b16", "b41", "b12", ...
2023-08-11
10.1145/3583780.3614999
[ { "authors": "Åke Björck", "journal": "Handbook of numerical analysis", "ref_id": "b0", "title": "Least squares methods", "year": "1990" }, { "authors": "C John; Butcher", "journal": "J. Comput. Appl. Math", "ref_id": "b1", "title": "Numerical methods for ordinary differenti...
[ { "formula_coordinates": [ 2, 96.2, 99.07, 176.81, 44.93 ], "formula_id": "formula_0", "formula_text": "𝑥 ! 𝑥 !\"# 𝑥 !\"$ 𝑥 !\"% 𝑥 !\"& 𝑒 ! 𝑒 !\"# 𝑒 !\"$ 𝑒 !\"% 𝑒 !\"&" }, { "formula_coordinates": [ 3, 96.17, 212.59, 198.41, ...
Optimal Linear Subspace Search: Learning to Construct Fast and High-Quality Schedulers for Diffusion Models
In recent years, diffusion models have become the most popular and powerful methods in the field of image synthesis, even rivaling human artists in artistic creativity. However, the key issue currently limiting the application of diffusion models is its extremely slow generation process. Although several methods were p...
Zhongjie Duan; Chengyu Wang; Cen Chen; Jun Huang; Weining Qian
[ { "figure_caption": "Figure 1 :1Figure1: The complete generation process of diffusion models consists of hundreds of steps for gradual denoising. Diffusion schedulers speed up this process by skipping some steps but may make destructive changes to images.", "figure_data": "", "figure_id": "fig_1", "...
[{"Category": "Methodological Basis", "Citation": "[40]", "Explanation": "The cited work highlights the slow sampling procedure in diffusion models, which the citing paper uses as a basis for discussing the limitations of the framework in terms of practicability."}, {"Category": "Methodological Basis", "Citation": "[31...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b4", "b18", "b6", "b7", "b9", "b26", "b15", "b30", "b33", "b29", "b33", "b19", "b19", "b16", "b24", "b25", "b14", "b3", "...
2023-11-15
10.1145/3442188.3445922
[ { "authors": "Emily M Bender; Timnit Gebru; Angelina Mcmillan-Major; Shmargaret Shmitchell", "journal": "Association for Computing Machinery", "ref_id": "b0", "title": "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?", "year": "2021" }, { "authors": "Yoav Benjamini;...
[]
Emergent Inabilities? Inverse Scaling Over the Course of Pretraining
Does inverse scaling only occur as a function of model size, or can it also occur over the course of training? We carry out an exploratory study investigating whether the performance of language models on specific tasks can decrease (while general performance remains high) during training on the language modeling task....
James A Michaelov; Benjamin K Bergen
[ { "figure_caption": "Figure 1 :1Figure1: Performance of the 8 Pythia(Biderman et al., 2023) models at 8 stages over the course of training at the two multiple-choice variants of TRUTHFULQA(Lin et al., 2022) and the 10 multiple-choice winners of the Inverse Scaling Prize(McKenzie et al., 2023b).", "figure_da...
[{"Category": "Supporting Evidence", "Citation": "(Brown et al., 2020)", "Explanation": "The cited work by Brown et al. provides evidence that increased number of model parameters and training dataset size positively impact model performance, which supports the claim made in the citing paper that bigger is usually bett...
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b24", "b1", "b13", "b34", "b38", "b3", "b40", "b14", "b6", "b31" ], "table_ref": [], "text": "Text-based question-answering datasets derive answ...
2023-05-24
10.18653/v1/2020.emnlp-main.19
[ { "authors": "Joshua Ainslie; Santiago Ontanon; Chris Alberti; Vaclav Cvicek; Zachary Fisher; Philip Pham; Anirudh Ravula; Sumit Sanghai; Qifan Wang; Li Yang", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "ETC: Encoding long and structured inputs in transformers",...
[ { "formula_coordinates": [ 5, 355.06, 286.04, 66.61, 10.63 ], "formula_id": "formula_0", "formula_text": "h q = BERT(Q)," }, { "formula_coordinates": [ 5, 318.85, 302.58, 206.29, 27.17 ], "formula_id": "formula_1", "form...
TACR: A Table-alignment-based Cell-selection and Reasoning Model for Hybrid Question-Answering
Hybrid Question-Answering (HQA), which targets reasoning over tables and passages linked from table cells, has witnessed significant research in recent years. A common challenge in HQA and other passage-table QA datasets is that it is generally unrealistic to iterate over all table rows, columns, and linked passages to...
Jian Wu; Yicheng Xu; Yan Gao; Jian-Guang Lou; Börje F Karlsson; Manabu Okumura
[ { "figure_caption": "Figure 1 :1Figure 1: Example from the HybridQA dataset. The top sentence is the original question, and words in different colors show different parts of questions required for reasoning in different modalities. the two headers in blue-dashed boxes are column names aligned with the given que...
[{"Category": "Data Source", "Citation": "(Rajpurkar et al., 2016)", "Explanation": "The cited work by Rajpurkar et al. (2016) provides a text-based question-answering dataset that is used as a foundational element in the citing paper for deriving answers based on reasoning over given passages."}, {"Category": "Data So...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b3", "b21", "b25", "b4", "b21", "b17", "b23", "b19", "b25", "b24", "b4", "b24", "b27", "b22", "b9", "b18", "b13", "b2", "...
2023-05-24
[ { "authors": "Chris Burges; Tal Shaked; Erin Renshaw; Ari Lazier; Matt Deeds; Nicole Hamilton; Greg Hullender", "journal": "", "ref_id": "b0", "title": "Learning to rank using gradient descent", "year": "2005" }, { "authors": "N Craswell; B Mitra; E Yilmaz; D Campos", "journal": "", ...
[ { "formula_coordinates": [ 2, 399.45, 353.73, 87.27, 10.63 ], "formula_id": "formula_0", "formula_text": "s i = f (q, d i , r i , D)." }, { "formula_coordinates": [ 2, 306.14, 559.09, 219.12, 17.74 ], "formula_id": "formula_...
Fusion-in-T5: Unifying Document Ranking Signals for Improved Information Retrieval
Common IR pipelines are typically cascade systems that may involve multiple rankers and/or fusion models to integrate different information step-by-step. In this paper, we propose a novel re-ranker named Fusion-in-T5 (FiT5), which integrates document text information, retrieval features, and global document information...
Shi Yu; Chenghao Fan; Chenyan Xiong; David Jin; Zhiyuan Liu; Zhenghao Liu
[ { "figure_caption": "Figure 1 :1Figure 1: Architecture of Fusion-in-T5. The query, document, and ranking feature are filled in the input template to form the input. We use retrieval score as the ranking feature.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figu...
[{"Category": "Methodological Basis", "Citation": "(Croft et al., 2010)", "Explanation": "The cited work by Croft et al. provides a foundational definition of the information retrieval task, which the citing paper uses to frame its research on building models for the task."}, {"Category": "Data Source", "Citation": "(Y...
[ { "figure_ref": [ "fig_0", "fig_2" ], "heading": "Introduction", "publication_ref": [ "b7", "b21", "b19", "b26", "b21", "b31", "b26", "b11", "b14", "b3", "b13", "b22", "b25", "b24" ], "table_ref":...
2023-11-02
[ { "authors": "P L Bartlett; M I Jordan; J D Mcauliffe", "journal": "Journal of the American Statistical Association", "ref_id": "b0", "title": "Convexity, classification, and risk bounds", "year": "2006" }, { "authors": "S Ben-David; J Blitzer; K Crammer; F Pereira", "journal": "", ...
[ { "formula_coordinates": [ 2, 245.05, 265.03, 259.61, 9.96 ], "formula_id": "formula_0", "formula_text": "R(f ) = E pte(x,y) [ℓ(f (x), y)],(1)" }, { "formula_coordinates": [ 2, 308.24, 295.31, 99.5, 14.32 ], "formula_id": "f...
Generalizing Importance Weighting to A Universal Solver for Distribution Shift Problems
Distribution shift (DS) may have two levels: the distribution itself changes, and the support (i.e., the set where the probability density is non-zero) also changes. When considering the support change between the training and test distributions, there can be four cases: (i) they exactly match; (ii) the training suppor...
Tongtong Fang; Nan Lu; Gang Niu; Masashi Sugiyama
[ { "figure_caption": "Figure 1 :1Figure 1: An illustration of the relationship between the training support and the test support.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "This is binary classification to distinguish re...
[{"Category": "Methodological Basis", "Citation": "(Quionero-Candela et al., 2009)", "Explanation": "The cited work by Quionero-Candela et al. (2009) is used to highlight the issue of distribution shift in deep supervised classification, which can lead to poor generalization in practice."}, {"Category": "Methodological...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b241", "b253", "b182", "b241", "b253", "b253", "b110", "b111", "b82", "b126", "b260", "b84", "b276", "b58", "b59", "b227", "b72", ...
2023-05-24
10.1201/b17320
[ { "authors": "E A Abioye; M S Z Abidin; M S A Mahmud; S Buyamin; M H I Ishak; M K I Abd Rahman; A O Otuoze; P Onotu; M S A Ramli", "journal": "Computers and Electronics in Agriculture", "ref_id": "b0", "title": "A review on monitoring and advanced control strategies for precision irrigation", "y...
[ { "formula_coordinates": [ 4, 59.36, 334.63, 229.31, 24.14 ], "formula_id": "formula_0", "formula_text": "min 𝜃 𝜆 𝑙 ⋅ ∑ (𝐱,𝑦)∈ 𝐿  𝑠𝑢𝑝 (𝐱, 𝑦, 𝜃)+𝜆 𝑢 ⋅ ∑ 𝐱∈ 𝑈  𝑢𝑛𝑠𝑢𝑝 (𝐱, 𝜃), (1)" }, { "formula_coordinates": [ 5, 331.51, ...
Label-Efficient Learning in Agriculture: A Comprehensive Review
The past decade has witnessed many great successes of machine learning (ML) and deep learning (DL) applications in agricultural systems, including weed control, plant disease diagnosis, agricultural robotics, and precision livestock management. Despite tremendous progress, one downside of such ML/DL models is that th...
Jiajia Li; Dong Chen; Xinda Qi; Zhaojian Li; Yanbo Huang; Daniel Morris; Xiaobo Tan
[ { "figure_caption": "Figure 1 :1Figure 1: The PRISMA guideline flowchart used in this review. The figure first row illustrates initially selected articles based on the keywords that enhanced the initial filtering before other exclusion criteria are applied.", "figure_data": "", "figure_id": "fig_0", ...
[{"Category": "Methodological Basis", "Citation": "(Walter et al., 2017)", "Explanation": "The cited work by Walter et al. (2017) provides a comprehensive overview of the use of information and communication technologies in smart farming, which serves as a methodological basis for the citing paper to discuss the integr...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b24", "b2", "b40", "b0", "b33", "b28", "b8", "b25", "b26", "b13", "b18", "b16" ], "table_ref": [], "text": "Deep neural networks (DNN) are ubiquitously...
2023-05-24
10.3115/v1/D14-1181
[ { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell; Sandhini Agarwal; Ariel Herbert-Voss; Gretchen Krueger; Tom Henighan; Rewon Child; Aditya Ramesh; Daniel Ziegler; Jeffrey Wu; Clemens Winter; Ch...
[ { "formula_coordinates": [ 2, 492.26, 95.85, 6.27, 47.15 ], "formula_id": "formula_0", "formula_text": "… … … …" }, { "formula_coordinates": [ 3, 70.87, 732.81, 220.08, 14.63 ], "formula_id": "formula_1", "formula_text":...
SELFOOD: Self-Supervised Out-Of-Distribution Detection via Learning to Rank
Deep neural classifiers trained with crossentropy loss (CE loss) often suffer from poor calibration, necessitating the task of out-ofdistribution (OOD) detection. Traditional supervised OOD detection methods require expensive manual annotation of in-distribution and OOD samples. To address the annotation bottleneck, we...
Dheeraj Mekala; Adithya Samavedhi; Chengyu Dong; Jingbo Shang
[ { "figure_caption": "Figure 1 :1Figure 1: CE Loss and SELFOOD optimization for two documents D 1 , D 2 belonging to Sports and Arts. CE loss increases the scores corresponding to the Sports class for D 1 and Arts class for D 2 , implying an intradocument comparison. Instead, SELFOOD compares the softmax scores ...
[{"Category": "Methodological Basis", "Citation": "(Liu et al., 2019)", "Explanation": "The cited work by Liu et al. provides a deep neural network (DNN) model for text classification, which the citing paper adopts as a method for their research on OOD detection."}, {"Category": "Data Source", "Citation": "(Devlin et a...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b40", "b28", "b57", "b59", "b49", "b60", "b2", "b11", "b7", "b50", "b38", "b39", "b37", "b1", "b46", "b39", "b17", "b10", ...
2023-10-28
10.18653/v1/N19-1264
[ { "authors": "Kristjan Arumae; Fei Liu", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Guiding extractive summarization with question-answering rewards", "year": "2019" }, { "authors": "Yuntao Bai; Andy Jones; Kamal Ndousse; Amanda Askell; Anna Chen; N...
[ { "formula_coordinates": [ 2, 198.41, 672.36, 62.9, 15.86 ], "formula_id": "formula_0", "formula_text": "(n) 1 , S (n) 2 )} N n=1" }, { "formula_coordinates": [ 2, 348.71, 734.77, 176.43, 36.25 ], "formula_id": "formula_1", ...
DecipherPref: Analyzing Influential Factors in Human Preference Judgments via GPT-4
Human preference judgments are pivotal in guiding large language models (LLMs) to produce outputs that align with human values. Human evaluations are also used in summarization tasks to compare outputs from various systems, complementing existing automatic metrics. Despite their significance, however, there has been li...
Yebowen Hu; Kaiqiang Song; Sangwoo Cho; Xiaoyang Wang; Hassan Foroosh; Fei Liu
[ { "figure_caption": "Figure 3 :3Figure 3: Two prompts for GPT-4 to assess whether a given summary is fluent (TOP) or clear (BOTTOM).", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Two prompts for GPT-4 ...
[{"Category": "Methodological Basis", "Citation": "(Papineni et al., 2002)", "Explanation": "The cited work by Papineni et al. (2002) introduces the BLEU and ROUGE metrics for comparing system summaries with reference texts based on word overlap, which serves as a methodological basis for the citing paper in evaluating...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b48", "b43", "b51", "b3", "b27", "b33", "b37", "b27", "b44", "b9", "b11", "b22", "b3" ], "table_ref": [], "text": "The recent years have witnesse...
2023-07-05
[ { "authors": "Vamsi Aribandi; Yi Tay; Tal Schuster; Jinfeng Rao; Huaixiu Steven Zheng; Sanket Vaibhav Mehta; Honglei Zhuang; Dara Vinh Q Tran; Jianmo Bahri; Ni", "journal": "", "ref_id": "b0", "title": "Ext5: Towards extreme multi-task scaling for transfer learning", "year": "2021" }, { ...
[ { "formula_coordinates": [ 5, 153.57, 210.28, 66.8, 13.18 ], "formula_id": "formula_0", "formula_text": "10 ¤110" } ]
Mixture-of-Experts Meets Instruction Tuning: A Winning Combination for Large Language Models
Sparse Mixture-of-Experts (MoE) is a neural architecture design that can be utilized to add learnable parameters to Large Language Models (LLMs) without increasing inference cost. Instruction tuning is a technique for training LLMs to follow instructions. We advocate combining these two approaches, as we find that MoE ...
Sheng Shen; Le Hou; Yanqi Zhou; Nan Du; Shayne Longpre; Jason Wei; Hyung Won Chung; Barret Zoph; William Fedus; Xinyun Chen; Tu Vu; Yuexin Wu; Wuyang Chen; Albert Webson; Yunxuan Li; Vincent Zhao; Hongkun Yu; Kurt Keutzer; Trevor Darrell; Denny Zhou
[ { "figure_caption": "Figure 2 :2Figure 2: Average zero performance of FLAN-MOE models versus FLAN-T5 dense models for similar effective FLOPs per token over the 57 MMLU tasks and 23 BBH tasks.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "f...
[{"Category": "Methodological Basis", "Citation": "[49]", "Explanation": "The cited work introduces the transformer-based language models that the citing paper builds upon to enhance their performance in NLP tasks."}, {"Category": "Extension or Continuation", "Citation": "[44,52,4,28,34,38]", "Explanation": "The cited ...
[ { "figure_ref": [ "fig_1", "fig_1" ], "heading": "I. INTRODUCTION", "publication_ref": [ "b1", "b0", "b2", "b4", "b2", "b6", "b0", "b7", "b7" ], "table_ref": [], "text": "Egocentric videos are captured from a first-person ...
2023-07-26
[ { "authors": "K C Chan; S Zhou; X Xu; C C Loy", "journal": "", "ref_id": "b0", "title": "Investigating tradeoffs in real-world video super-resolution", "year": "2022" }, { "authors": "", "journal": "", "ref_id": "b1", "title": "Tips for extended gopro battery life + accessori...
[ { "formula_coordinates": [ 3, 326.87, 623.39, 236.17, 31.8 ], "formula_id": "formula_0", "formula_text": "I (t) motion = t+⌊N/2⌋ j=t-⌊N/2⌋ r • I (j) clear + (1 -N • r) • I (t) clear (1)" }, { "formula_coordinates": [ 4, 90.05, 389.09, 209...
EgoVSR: Towards High-Quality Egocentric Video Super-Resolution
Due to the limitations of capture devices and scenarios, egocentric videos frequently have low visual quality, mainly caused by high compression and severe motion blur. With the increasing application of egocentric videos, there is an urgent need to enhance the quality of these videos through super-resolution. However,...
Yichen Chi; Junhao Gu; Jiamiao Zhang; Wenming Yang; Yapeng Tian
[ { "figure_caption": "VSR results from our method (top) and Real-BasicVSR[1] (bottom).", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 1 :1Fig. 1: Egocentric videos generally contain temporally changing visual scenes with...
[{"Category": "Supporting Evidence", "Citation": "[2]", "Explanation": "The cited work provides information on the limitations of wearable cameras in terms of image quality and battery life, which supports the discussion on the challenges of egocentric VSR in the citing paper."}, {"Category": "Methodological Basis", "C...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b40", "b13", "b10", "b23", "b19", "b34", "b17", "b31", "b0", "b22", "b4", "b7", "b41", "b36", "b11", "b25", "b32", "b43", ...
2023-11-03
10.1145/3442188.3445922
[ { "authors": "Sandhini Agarwal; Gretchen Krueger; Jack Clark; Alec Radford; Jong Wook Kim; Miles Brundage", "journal": "", "ref_id": "b0", "title": "Evaluating clip: towards characterization of broader capabilities and downstream implications", "year": "2021" }, { "authors": "Peter Ander...
[ { "formula_coordinates": [ 3, 315.02, 737.28, 209.99, 35.68 ], "formula_id": "formula_0", "formula_text": "AccG,C = 1 N N i=1 1[S(c good i , ri, Ii) > S(c bad i , ri, Ii)],(1)" }, { "formula_coordinates": [ 4, 120.2, 569.29, 169.53, ...
Gender Biases in Automatic Evaluation Metrics for Image Captioning
Model-based evaluation metrics (e.g., CLIP-Score and GPTScore) have demonstrated decent correlations with human judgments in various language generation tasks. However, their impact on fairness remains largely unexplored. It is widely recognized that pretrained models can inadvertently encode societal biases, thus empl...
Haoyi Qiu; Zi-Yi Dou; Tianlu Wang; Asli Celikyilmaz; Nanyun Peng
[ { "figure_caption": "Good caption: a woman who is reading Bad caption: a man who is reading Reference: a photo of a woman who is reading CLIPScore 0", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Example...
[{"Category": "Supporting Evidence", "Citation": "(Zhang et al., 2019)", "Explanation": "The cited work by Zhang et al. (2019) introduces the BERTScore evaluation metric, which has shown promising performance in terms of correlation with human judgments in generation tasks. This work provides foundational evidence for ...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b14", "b10", "b27", "b13", "b24", "b6", "b6", "b10", "b1", "b13", "b27", "b14", "b8", "b27", "b13", "b25", "b18", "b0", "...
[ { "authors": "Z Allen-Zhu; Y Li; Z Song", "journal": "", "ref_id": "b0", "title": "A convergence theory for deep learning via overparameterization", "year": "2019" }, { "authors": "M Arjovsky; S Chintala; L Bottou", "journal": "", "ref_id": "b1", "title": "Wasserstein generat...
[ { "formula_coordinates": [ 3, 169.9, 169.99, 334.71, 39.72 ], "formula_id": "formula_0", "formula_text": "f θ S is dF (Q θ S , P0) = sup g∈F E S lim sup m→∞ 1 m m j=1 g(z j , S) -Ex∼P 0 [g(x, S)] ,(1)" }, { "formula_coordinates": [ 3, 134.47, 2...
On the Generalization of Diffusion Model
The diffusion probabilistic generative models are widely used to generate highquality data. Though they can synthetic data that does not exist in the training set, the rationale behind such generalization is still unexplored. In this paper, we formally define the generalization of the generative model, which is measure...
Mingyang Yi; Jiacheng Sun; Zhenguo Li
[ { "figure_caption": "Figure 1 :1Figure 1: The first figure is the averaged distance ∥x t -x * t ∥ per dimension (3×32×32) over 50k samples of generated CIFAR10. The second figure randomly samples a batch of x t and x * t with the same x T = x * T and T = 50.", "figure_data": "", "figure_id": "fig_0", ...
[{"Category": "Methodological Basis", "Citation": "(Somepalli et al., 2022)", "Explanation": "The cited work provides empirical evidence of the diffusion model generating data that is combined with parts of the training set, which the citing paper uses to highlight the need for extrapolating in the application of the d...
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b32", "b1", "b31", "b47", "b26", "b7", "b35", "b35", "b24", "b6" ], "table_ref": [], "text": "For safe autonomous driving, predicting a vehicle's future trajectory...
2023-05-24
[ { "authors": "Inhwan Bae; Jin-Hwi Park; Hae-Gon Jeon", "journal": "", "ref_id": "b0", "title": "Non-probability sampling network for stochastic human trajectory prediction", "year": "2022" }, { "authors": "Alexander Barth; Uwe Franke", "journal": "IEEE", "ref_id": "b1", "titl...
[ { "formula_coordinates": [ 4, 133.12, 93.32, 354.01, 138.5 ], "formula_id": "formula_0", "formula_text": "Future GT Trajectory (𝐱𝐱 + ) Lane Graph 𝓖𝓖 Interaction edge 𝐳𝐳 e -/+ Interaction Feature (𝐡𝐡 𝑅𝑅 ) F Trajectory Samples (𝐘𝐘) Intention Feature ( 𝐡𝐡 𝐼�...
LEVERAGING FUTURE RELATIONSHIP REASONING FOR VEHICLE TRAJECTORY PREDICTION
Understanding the interaction between multiple agents is crucial for realistic vehicle trajectory prediction. Existing methods have attempted to infer the interaction from the observed past trajectories of agents using pooling, attention, or graph-based methods, which rely on a deterministic approach. However, these me...
Daehee Park; Hobin Ryu; Yunseo Yang; Jegyeong Cho; Jiwon Kim; Kuk-Jin Yoon
[ { "figure_caption": "FigureFigure 2: Lane segments represented in different colors.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "al. (2014)) or VAE (Kingma & Welling (2013)) have been employed to address this issue. GAN-ba...
[{"Category": "Methodological Basis", "Citation": "(Lin et al., 2000)", "Explanation": "The cited work by Lin et al. (2000) provides a heuristic method for predicting a vehicle's future trajectory, which the citing paper adopts as a foundational approach for early trajectory prediction models."}, {"Category": "Methodol...
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b11", "b16", "b20", "b42", "b4", "b21", "b18", "b26" ], "table_ref": [], "text": "Advances in multilingual natural language processing (NLP) technologies (D...
2023-05-24
10.1162/tacl_a_00416
[ { "authors": "David Ifeoluwa Adelani; Jade Abbott; Graham Neubig; D' Daniel; Julia Souza; Constantine Kreutzer; Chester Lignos; Happy Palen-Michel; Shruti Buzaaba; Sebastian Rijhwani; Stephen Ruder; Israel Mayhew; Shamsuddeen H Abebe Azime; Chris Chinenye Muhammad; Joyce Emezue; Perez Nakatumba-Nabende; Aremu O...
[ { "formula_coordinates": [ 3, 339.65, 120.03, 185.49, 24.32 ], "formula_id": "formula_0", "formula_text": "u l = performance l theoretical max performance (1)" }, { "formula_coordinates": [ 3, 376.94, 349.88, 148.2, 29.59 ], ...
GlobalBench: A Benchmark for Global Progress in Natural Language Processing
Despite the major advances in NLP, significant disparities in NLP system performance across languages still exist. Arguably, these are due to uneven resource allocation and sub-optimal incentives to work on less resourced languages. To track and further incentivize the global development of equitable language technolog...
Yueqi Song; Catherine Cui; Simran Khanuja; Pengfei Liu; Fahim Faisal; Alissa Ostapenko; Genta Indra Winata; Alham Fikri; Samuel Cahyawijaya; Yulia Tsvetkov; Antonios Anastasopoulos; Graham Neubig
[ { "figure_caption": "Figure 1 :1Figure 1: GlobalBench Design: A leaderboard for each task is separately maintained. Each leaderboard contains a multi-faceted evaluation of submitted systems, along with a ranking of the most under-served languages. More details can be found in Section 2.", "figure_data": "",...
[{"Category": "Extension or Continuation", "Citation": "(Dabre et al., 2020)", "Explanation": "The cited work by Dabre et al. (2020) is an important contribution to the field of multilingual NLP, and the citing paper extends the research by exploring the possibilities of NLP systems that benefit all people around the w...
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b46", "b14", "b40", "b52", "b80", "b10", "b59", "b55", "b26", "b16", "b73", "b58", "b74", "b18" ], "table_ref": [], "text": "Pretrained (Ra...
2024-03-26
10.18653/v1/2021.emnlp-main.397
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "K training instances containing responses that abruptly end in a colon", "year": "" }, { "authors": " Werra", "journal": "", "ref_id": "b1", "title": "We use the huggingface TRL", "year": "2020" }, { "autho...
[ { "formula_coordinates": [ 3, 212.52, 464.59, 292.15, 22.24 ], "formula_id": "formula_0", "formula_text": "J(θ) = max θ x∈X d πref (x) y∈Y R(x, y, ⋆)π θ (y|x) (1)" }, { "formula_coordinates": [ 3, 140.42, 515.67, 364.24, 20.24 ]...
LEFTOVER-LUNCH: ADVANTAGE-BASED OFFLINE REINFORCEMENT LEARNING FOR LANGUAGE MODELS
Reinforcement Learning with Human Feedback (RLHF) is the most prominent method for Language Model (LM) alignment. However, RLHF is an unstable and data-hungry process that continually requires new high-quality LM-generated data for finetuning. We introduce Advantage-Leftover Lunch RL (A-LOL), a new class of offline pol...
Ashutosh Baheti; Ximing Lu; Ronan Le Bras; Mark Riedl
[ { "figure_caption": "Figure 1 :1Figure 1: Illustration of Advantage-Leftover Lunch RL in practice. We first supervised finetune the reference policy (π ref ) on the training data as a precursor to A-LOL training. Then, an external reward model is employed to train the value estimate layer (V πref ) on frozen π ...
[{"Category": "Methodological Basis", "Citation": "(Touvron et al., 2023b)", "Explanation": "The cited work introduces the use of RLHF in finetuning LMs, which serves as a methodological basis for the citing paper to build upon in their research on improving the quality and safety of LMs."}, {"Category": "Extension or ...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b3", "b32", "b1", "b6" ], "table_ref": [], "text": "U.S. Supreme Court justices hear some of the most important cases in the country, resolving disagreements among lower courts, adjudicating the cons...
2023-05-24
10.1016/j.artint.2020.103387
[ { "authors": "Bryant Michael L Anderson; Shuda Lee; Jon Li; Ben Go; Luoyan Sutandio; Zhou", "journal": "", "ref_id": "b0", "title": "On the Types, Frequency, Uses and Characteristics of Metalanguage in Conversation", "year": "2006" }, { "authors": "Katie Atkinson; Trevor Bench-Capon; Da...
[]
CuRIAM: Corpus re Interpretation and Metalanguage in U.S. Supreme Court Opinions
Most judicial decisions involve the interpretation of legal texts; as such, judicial opinion requires the use of language as a medium to comment on or draw attention to other language. Language used this way is called metalanguage. We develop an annotation schema for categorizing types of legal metalanguage and apply o...
Michael Kranzlein; Nathan Schneider; Kevin Tobia
[ { "figure_caption": "(4) ...the term \"violation\" referred to [ D the \"[a]ct or instance of violating, or state of being violated].\" Webster's New International Dictionary 2846 (2d ed. 1949) (Webster's Second).", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "fig...
[{"Category": "Data Source", "Citation": "(Tobia, 2021)", "Explanation": "The cited work is a source of information for furthering legal and linguistic scholarship on judicial interpretation, which the citing paper may use as a reference for its own research."}, {"Category": "Data Source", "Citation": "(Go\u017ad\u017a...
[ { "figure_ref": [ "fig_0" ], "heading": "", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b11", "b12", "b10", ...
2023-10-21
10.1080/01431168908903939
[ { "authors": "A Singh", "journal": "International Journal of Remote Sensing", "ref_id": "b0", "title": "Review article digital change detection techniques using remotely-sensed data", "year": "1989" }, { "authors": "H Chen; Z Shi", "journal": "Remote Sensing", "ref_id": "b1", ...
[ { "formula_coordinates": [ 5, 93.78, 531.85, 206.24, 12.73 ], "formula_id": "formula_1", "formula_text": "I hr s , I lr s = swap(I hr d , I lr u , u, v, crop size),(3)" }, { "formula_coordinates": [ 5, 48.96, 552.69, 251.06, 20.91 ...
Continuous Cross-resolution Remote Sensing Image Change Detection
Most contemporary supervised Remote Sensing (RS) image Change Detection (CD) approaches are customized for equal-resolution bitemporal images. Real-world applications raise the need for cross-resolution change detection, aka, CD based on bitemporal images with different spatial resolutions. Given training samples of a ...
Hao Chen; Haotian Zhang; Keyan Chen; Chenyao Zhou; Song Chen; Zhengxia Zou; Zhenwei Shi
[ { "figure_caption": "Fig. 1 .1Fig. 1. Illustration of continuous cross-resolution change detection, i.e., CD towards varying resolution difference ratios between the HR image and the relatively LR image.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" },...
[{"Category": "Methodological Basis", "Citation": "[6]", "Explanation": "The cited work on convolutional neural networks (CNNs) is used as a methodological basis for the change detection techniques in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[7]", "Explanation": "The cited work on vision t...
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b29", "b41", "b42", "b52", "b34", "b28", "b59", "b23", "b43", "b58", "b10", "b0" ], "table_ref": [], "text": "Entity linking aims to...
2023-05-24
10.48550/arxiv.2207.04108
[ { "authors": "Omar Adjali; Romaric Besançon; Olivier Ferret; Hervé Le Borgne; Brigitte Grau", "journal": "Springer", "ref_id": "b0", "title": "Multimodal entity linking for tweets", "year": "2020-04-14" }, { "authors": "Tom Ayoola; Joseph Fisher; Andrea Pierleoni", "journal": "", ...
[ { "formula_coordinates": [ 8, 184.79, 392.02, 82.08, 15.24 ], "formula_id": "formula_0", "formula_text": "K-1 j=0 exp(s t (r, e j ))" }, { "formula_coordinates": [ 8, 106.17, 597.65, 150.02, 33.3 ], "formula_id": "formula_1"...
AMELI: Enhancing Multimodal Entity Linking with Fine-Grained Attributes
We propose attribute-aware multimodal entity linking, where the input is a mention described with a text and image, and the goal is to predict the corresponding target entity from a multimodal knowledge base (KB) where each entity is also described with a text description, a visual image and a set of attributes and val...
Barry Menglong Yao; Yu Chen; Qifan Wang; Sijia Wang; Minqian Liu; Zhiyang Xu; Licheng Yu; Lifu Huang
[ { "figure_caption": ":Figure 1 :1Figure 1: An example for our attribute-aware multimodal entity linking. Left: review text and image; Right: product title, image, description, and attributes. In order to link the mention ASUS laptop to the target entity, we need to be aware of the attributes, e.g., memory and S...
[{"Category": "Methodological Basis", "Citation": "(Onoe and Durrett, 2020)", "Explanation": "The cited work by Onoe and Durrett (2020) provides a methodological basis for the citing paper by introducing techniques for entity linking in text."}, {"Category": "Methodological Basis", "Citation": "(Zhang et al., 2021b)", ...
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b18", "b49", "b11", "b10", "b44", "b13", "b24", "b33", "b31" ], "table_ref": [], "text": "Deep language representations have become the dominant form ...
2023-06-01
10.1016/j.inffus.2019.12.012
[ { "authors": "Stefano Baccianella; Andrea Esuli; Fabrizio Sebastiani", "journal": "European Language Resources Association (ELRA", "ref_id": "b0", "title": "SentiWordNet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining", "year": "2010" }, { "authors": "Alejandr...
[ { "formula_coordinates": [ 4, 70.87, 586.86, 218.27, 41.1 ], "formula_id": "formula_0", "formula_text": "S c i = {s j,c i } m j=1 associated with each category c i ∈ C, where C = {c i } d i=1 ." }, { "formula_coordinates": [ 4, 70.87, 643.01, ...
SENTECON: Leveraging Lexicons to Learn Human-Interpretable Language Representations
Although deep language representations have become the dominant form of language featurization in recent years, in many settings it is important to understand a model's decisionmaking process. This necessitates not only an interpretable model but also interpretable features. In particular, language must be featurized i...
Victoria Lin; Louis-Philippe Morency
[ { "figure_caption": "Figure 1 :1Figure 1: A comparison of lexicon-based language representations and SENTECON. While lexicons encode wordlevel category counts, SENTECON parses whole sentences and encodes sentence-level category intensities.", "figure_data": "", "figure_id": "fig_0", "figure_label": ...
[{"Category": "Methodological Basis", "Citation": "(Lin et al., 2020)", "Explanation": "The cited work by Lin et al. provides a method for understanding the relationship between language patterns and specific outcomes, which the citing paper builds upon in their research on affective computing, computational social sci...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b63", "b12", "b58", "b70", "b29", "b88", "b85", "b43", "b83", "b90", "b96", "b31", "b46", "b8", "b55", "b77", "b54", "b67", ...
2023-11-09
10.18653/v1/2021.acl-long.551
[ { "authors": "Ahmed Abdelali; Sabit Hassan; Hamdy Mubarak; Kareem Darwish; Younes Samih", "journal": "", "ref_id": "b0", "title": "Pre-training bert on arabic tweets: Practical considerations", "year": "2021" }, { "authors": "Muhammad Abdul-Mageed; Abdelrahim Elmadany; El Moatez; Billah ...
[ { "formula_coordinates": [ 4, 94.45, 173.79, 403.59, 29.62 ], "formula_id": "formula_0", "formula_text": "M² R K K R K S Lev. R M R M R R ARETA R K D R" }, { "formula_coordinates": [ 4, 251.72, 359.79, 262.9, 413.59 ], "form...
Advancements in Arabic Grammatical Error Detection and Correction: An Empirical Investigation
Grammatical error correction (GEC) is a wellexplored problem in English with many existing models and datasets. However, research on GEC in morphologically rich languages has been limited due to challenges such as data scarcity and language complexity. In this paper, we present the first results on Arabic GEC using two...
Bashar Alhafni; Go Inoue; Christian Khairallah; Nizar Habash
[ { "figure_caption": "Figure 1 :1Figure1: An example showing the differences between the alignments of the M 2 scorer, a standard Levenshtein distance, ARETA, and our proposed algorithm. The edit operations are keep (K), replace (R), insert (I), delete (D), merge (M), and split (S). Dotted lines between the erro...
[{"Category": "Supporting Evidence", "Citation": "(Ng et al., , 2014;;Bryant et al., 2019)", "Explanation": "The cited works provide a foundation for the organization of shared tasks in English grammatical error correction, which is a key factor in the development of SOTA GEC systems."}, {"Category": "Extension or Cont...
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b3", "b21", "b25", "b19", "b32", "b28", "b30", "b0", "b28", "b19" ], "table_ref": [], "text": "Language models (LMs) are remarkably effective in...
10.18653/v1/2020.emnlp-main.749
[ { "authors": "Sid Black; Gao Leo; Phil Wang; Connor Leahy; Stella Biderman", "journal": "", "ref_id": "b0", "title": "GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow", "year": "2021" }, { "authors": "Shuyang Cao; Lu Wang", "journal": "", "ref_id": "b1",...
[ { "formula_coordinates": [ 2, 116.48, 437.17, 127.04, 28.95 ], "formula_id": "formula_0", "formula_text": "y t ∼ p θ (y t | c, x, y <t ) ∝ exp logit θ (y t | c, x, y <t )" }, { "formula_coordinates": [ 2, 319.87, 89.02, 190.32, 43.5...
Trusting Your Evidence: Hallucinate Less with Context-aware Decoding
Language models (LMs) often struggle to pay enough attention to the input context, and generate texts that are unfaithful or contain hallucinations. To mitigate this issue, we present context-aware decoding (CAD), which follows a contrastive output distribution that amplifies the difference between the output probabili...
Weijia Shi; Xiaochuang Han; Mike Lewis; Yulia Tsvetkov; Luke Zettlemoyer; Scott Yih
[ { "figure_caption": "Figure 1 :1Figure 1: An illustration of context-aware decoding.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: OPT models of varying sizes consistently benefit from CAD. The x-axis ...
[{"Category": "Methodological Basis", "Citation": "(Chan et al., 2022)", "Explanation": "The cited work provides a framework for understanding the role of context knowledge in language model generation, which the citing paper builds upon in their study of the balance between prior and context knowledge."}, {"Category":...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b11", "b29", "b27", "b19", "b42", "b28", "b40" ], "table_ref": [], "text": "Using natural language for image generation and manipulation is a straightforward and intuitive approach ...
2023-06-05
[ { "authors": "Rameen Abdal; Yipeng Qin; Peter Wonka", "journal": "", "ref_id": "b0", "title": "Image2stylegan: How to embed images into the stylegan latent space", "year": "2019" }, { "authors": "Rameen Abdal; Yipeng Qin; Peter Wonka", "journal": "", "ref_id": "b1", "title": ...
[ { "formula_coordinates": [ 4, 312.88, 616.65, 154.96, 17.17 ], "formula_id": "formula_0", "formula_text": "q (x t | x t-1 ) := N √ 1 -β t x t-1 , β t I" }, { "formula_coordinates": [ 5, 211.94, 90.83, 292.73, 23.61 ], "formu...
ChatFace: Chat-Guided Real Face Editing via Diffusion Latent Space Manipulation
Dongxu Yue; Qin Guo; Munan Ning; Jiaxi Cui; Yuesheng Zhu; Li Yuan
[ { "figure_caption": "Figure 2 :2Figure 2: Overview of ChatFace inference pipeline. Large language model parsing queries from user for solving facial image editing tasks, which then enable the activation of corresponding facial attributes and control over the editing strength in diffusion semantic latent space."...
[{"Category": "Supporting Evidence", "Citation": "[12]", "Explanation": "The cited work on Generative Adversarial Networks (GANs) provides the foundational approach for image generation and manipulation in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[30]", "Explanation": "The cited work on CLI...
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b1", "b0", "b2", "b3", "b4", "b5" ], "table_ref": [], "text": "Currency is a universally accepted medium of exchange that enables the trade of goods and services. Typically, i...
[ { "authors": "", "journal": "Investopedia", "ref_id": "b0", "title": "Currency", "year": "2022-07-22" }, { "authors": "", "journal": "CFI team", "ref_id": "b1", "title": "Currency", "year": "2019-11-14" }, { "authors": " Fxssi", "journal": "", "ref_id": "b...
[ { "formula_coordinates": [ 4, 366.65, 727.27, 176.16, 19.28 ], "formula_id": "formula_0", "formula_text": "𝑋 𝑛𝑜𝑟𝑚 = 𝑋 𝑐𝑢𝑟𝑟𝑒𝑛𝑡 -𝑋 𝑚𝑖𝑛 𝑋 𝑚𝑎𝑥 -𝑋 𝑚𝑖𝑛 (1)" } ]
Applications of Machine Learning in Detecting Afghan Fake Banknotes
Fake currency, unauthorized imitation money lacking government approval, constitutes a form of fraud. Particularly in Afghanistan, the prevalence of fake currency poses significant challenges and detrimentally impacts the economy. While banks and commercial establishments employ authentication machines, the public lack...
Hamida Ashna; Ziaullah Momand
[ { "figure_caption": "Fig. 1 .1Fig. 1. Flow of proposed method", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. AFN banknote cropped two main features", "figure_data": "", "figure_id": "fig_1", "fi...
[{"Category": "Data Source", "Citation": "[1]", "Explanation": "The cited work provides a historical overview of the evolution of currency, including its origins in Ancient Egypt and the current form of currency in use today."}, {"Category": "Data Source", "Citation": "[2]", "Explanation": "The cited work highlights th...
[ { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b36", "b12", "b30", "b15", "b17" ], "table_ref": [], "text": "Large language models (LLMs) have been shown to hallucinate, meaning that...
2023-05-24
10.18653/v1/N19-1300
[ { "authors": "Hussam Alkaissi; I Samy; Mcfarlane", "journal": "Cureus", "ref_id": "b0", "title": "Artificial hallucinations in chatgpt: implications in scientific writing", "year": "2023" }, { "authors": "Yejin Bang; Samuel Cahyawijaya; Nayeon Lee; Wenliang Dai; Dan Su; Bryan Wilie; Hol...
[ { "formula_coordinates": [ 3, 319.65, 600.42, 205.36, 26.84 ], "formula_id": "formula_0", "formula_text": "scoreT (C, a) = 1 n -1 n i=2 1(C tf (i) = \"true\"), (1)" } ]
Mastering the ABCDs of Complex Questions: Answer-Based Claim Decomposition for Fine-grained Self-Evaluation
When answering complex questions, large language models (LLMs) may produce answers that do not satisfy all criteria of the question. While existing self-evaluation techniques aim to detect if such answers are correct, these techniques are unable to determine which criteria of the question are satisfied by the generated...
Nishant Balepur; Jie Huang; Samraj Moorjani; Hari Sundaram; Kevin Chen-Chuan Chang
[ { "figure_caption": "Figure 1 :1Figure 1: Using answer-based claim decomposition to verify ChatGPT's answer to an OBSCUREQA question.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Claim 1 : 3 :Figure 3 :133Figure 3: Qualit...
[{"Category": "Supporting Evidence", "Citation": "(Alkaissi and McFarlane, 2023)", "Explanation": "The cited work provides evidence that LLMs are capable of generating untruthful statements, which is a key factor in the discussion of LLMs and their potential to cause harm in decision-making."}, {"Category": "Supporting...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b75", "b41", "b41", "b17", "b61", "b31", "b65", "b24" ], "table_ref": [], "text": "Metrics that capture human-like attributes of dialog agents can help inform dialog agents th...
2023-09-16
10.1073/pnas.2218523120
[ { "authors": "Muhammad Abdul-Mageed; Anneke Buffone; Hao Peng; Salvatore Giorgi; Johannes C Eichstaedt; Lyle H Ungar", "journal": "", "ref_id": "b0", "title": "Recognizing pathogenic empathy in social media", "year": "2017" }, { "authors": "Gavin Abercrombie; Amanda Cercas Curry; Tanvi D...
[]
Psychological Metrics for Dialog System Evaluation
We present metrics for evaluating dialog systems through a psychologically-grounded "human" lens in which conversational agents express a diversity of both states (e.g., emotion) and traits (e.g., personality), just as people do. We present five interpretable metrics from established psychology that are fundamental to ...
Salvatore Giorgi; Shreya Havaldar; Farhan Ahmed; Zuhaib Akhtar; Shalaka Vaidya; Gary Pan; Lyle H Ungar; H Andrew Schwartz; João Sedoc
[ { "figure_caption": "Figure 1 :1Figure1: Mauve score (traditional metric) and emotion matching (psychological metric) to evaluate two conversation snippets (turns). Humans rated the top response as highly appropriate and the bottom response as inappropriate. The dialog agent's response in both conversations rec...
[{"Category": "Methodological Basis", "Citation": "(Chen et al., 2021)", "Explanation": "The cited work by Chen et al. provides a discussion on the limitations of traditional automatic metrics in evaluating open-domain dialog systems, which the citing paper builds upon to highlight the need for a more comprehensive app...
[ { "figure_ref": [ "fig_0", "fig_1", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b38", "b39", "b42", "b37", "b18", "b52", "b7", "b28", "b1", "b23", "b45", "b50", "b51", "b27" ], ...
2023-07-30
[ { "authors": "Rahaf Aljundi; Francesca Babiloni; Mohamed Elhoseiny; Marcus Rohrbach; Tinne Tuytelaars", "journal": "", "ref_id": "b0", "title": "Memory aware synapses: Learning what (not) to forget", "year": "2018" }, { "authors": "Rahaf Aljundi; Punarjay Chakravarty; Tinne Tuytelaars", ...
[ { "formula_coordinates": [ 3, 62.4, 97.44, 466.18, 195.72 ], "formula_id": "formula_0", "formula_text": "ℒ clf Dataset D i Module i B-E-E-R-S F i Stage-II : Multi-lingual modeling Rehearsal Set M i ℋ 1 Frozen F 1 F 2 F i φ 1 φ 2 φ i TDR•東京 ℒ clf" }, { "formula_c...
MRN: Multiplexed Routing Network for Incremental Multilingual Text Recognition
Multilingual text recognition (MLTR) systems typically focus on a fixed set of languages, which makes it difficult to handle newly added languages or adapt to ever-changing data distributions. In this paper, we propose the Incremental MLTR (IMLTR) task in the context of incremental learning (IL), where different language...
Tianlun Zheng; Zhineng Chen; Bingchen Huang; Wei Zhang; Yu-Gang Jiang
[ { "figure_caption": "Figure 1 .1Figure 1. Incremental multilingual text recognition (IMLTR) focuses on the practical scenario where different languages are introduced sequentially. The goal is to accurately recognize the newly introduced language while maintaining high recognition accuracy for previously seen l...
[{"Category": "Methodological Basis", "Citation": "[39,40,43,38,19,53]", "Explanation": "The cited works provide deep learning methods that have improved the accuracy of scene text recognition, which the citing paper leverages in its research on the task of reading text in natural scenes."}, {"Category": "Extension or ...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b5", "b11", "b16", "b4", "b14", "b5", "b10", "b3", "b0", "b23", "b24", "b26", "b24", "b27" ], "table_ref": [], "text": "In recent years, Na...
10.18653/v1/d15-1075
[ { "authors": "Armen Aghajanyan; Akshat Shrivastava; Anchit Gupta; Naman Goyal; Luke Zettlemoyer; Sonal Gupta", "journal": "", "ref_id": "b0", "title": "Better fine-tuning by reducing representational collapse", "year": "2021" }, { "authors": "R Samuel; Gabor Bowman; Christopher Angeli; C...
[ { "formula_coordinates": [ 3, 136.94, 485, 152.79, 19.75 ], "formula_id": "formula_0", "formula_text": "θt+1 = θt -η ∂L (θt) ∂θt (1)" }, { "formula_coordinates": [ 3, 105.34, 628.32, 184.4, 47.56 ], "formula_id": "formula_1"...
Bi-Drop: Enhancing Fine-tuning Generalization via Synchronous sub-net Estimation and Optimization
Pretrained language models have achieved remarkable success in natural language understanding. However, fine-tuning pretrained models on limited training data tends to overfit and thus diminish performance. This paper presents Bi-Drop, a fine-tuning strategy that selectively updates model parameters using gradients fro...
Shoujie Tong; Heming Xia; Damai Dai; Runxin Xu; Tianyu Liu; Binghuai Lin; Yunbo Cao; Zhifang Sui
[ { "figure_caption": "Figure 2 :2Figure 2: An overall illustration of Bi-Drop. Bi-Drop splits each training step into three sub-steps: (1) Multiple forward propagations: each mini-batch sample goes through the forward pass multiple times (denoted as k) with dropout;(2) sub-net selection: an advanced strategy is ...
[{"Category": "Methodological Basis", "Citation": "(Phang et al., 2018)", "Explanation": "The cited work by Phang et al. (2018) provides foundational data and insights on the challenges of maintaining generalization performance in fine-tuning methods, which the citing paper builds upon to address the issue of overfitti...
[ { "figure_ref": [ "fig_2" ], "heading": "Introduction", "publication_ref": [ "b22", "b55", "b71", "b36", "b59", "b30", "b72", "b77", "b33", "b7", "b14", "b58", "b36", "b32" ], "table_ref": [], "text...
2023-10-10
10.1007/978-3-031-19815-1_11
[ { "authors": "Youngmin Baek; Bado Lee; Dongyoon Han; Sangdoo Yun; Hwalsuk Lee", "journal": "", "ref_id": "b0", "title": "Character region awareness for text detection", "year": "2019" }, { "authors": "Leilani Battle; Peitong Duan; Zachery Miranda; Dana Mukusheva; Remco Chang; Michael Sto...
[]
UniChart: A Universal Vision-language Pretrained Model for Chart Comprehension and Reasoning
Charts are widely used for data analysis, providing visual representations and insights into complex data. To facilitate chart-based data analysis using natural language, several downstream tasks have been introduced recently such as chart question answering and chart summarization. However, existing methods for these ...
Ahmed Masry; Parsa Kavehzadeh; Xuan Long; Enamul Hoque; Shafiq Joty
[ { "figure_caption": "Sum two leftmost values of gray line </answer> <open-ended question answering> What is the birth rate in the U.S. from 2005 to 2019? </answer> 90.0", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Characte...
[{"Category": "Methodological Basis", "Citation": "(Hoque et al., 2022)", "Explanation": "The cited work provides a general introduction to the use of information visualizations in data analysis, which serves as a methodological basis for the citing paper in understanding the importance of charts in data analysis."}, {...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b61", "b58", "b27", "b4", "b3", "b2", "b62", "b44", "b27", "b31", "b55", "b4", "b27", "b55", "b29", "b40" ], "table_ref": [], "...
10.18653/v1/D19-6004
[ { "authors": "Simon Baron-Cohen; Alan M Leslie; Uta Frith", "journal": "Cognition", "ref_id": "b0", "title": "Does the autistic child have a \"theory of mind", "year": "1985" }, { "authors": "Simon Baron-Cohen; Michelle O 'riordan; Valerie Stone; Rosie Jones; Kate Plaisted", "journal...
[]
Clever Hans or Neural Theory of Mind? Stress Testing Social Reasoning in Large Language Models
The escalating debate on AI's capabilities warrants developing reliable metrics to assess machine "intelligence." Recently, many anecdotal examples were used to suggest that newer large language models (LLMs) like ChatGPT and GPT-4 exhibit Neural Theory-of-Mind (N-ToM); however, prior work reached conflicting conclusio...
Natalie Shapira; Mosh Levy; Seyed Hossein Alavi; Xuhui Zhou; Yejin Choi; Yoav Goldberg; Maarten Sap; Vered Shwartz
[ { "figure_caption": "FalseBelief. In the false-belief examples from Kosinski (2023), the protagonist's belief about the content of the container is different from its actual contents. The examples are variants of the corresponding original tests from psychology, e.g. the unexpected contents examples are variant...
[{"Category": "Supporting Evidence", "Citation": "(2022)", "Explanation": "The cited work by (2022) shows that LLMs lack the ability to demonstrate N-ToM, which is a key finding that supports the claims made in the citing paper about the limitations of LLMs in this area."}, {"Category": "Extension or Continuation", "Ci...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b18" ], "t...
2023-10-19
10.1109/TBDATA.2019.2921572
[ { "authors": " Openai", "journal": "OpenAI", "ref_id": "b0", "title": "", "year": "2023" }, { "authors": "Le Teven; Angela Scao; Christopher Fan; Ellie Akiki; Suzana Pavlick; Daniel Ilic; Roman Hesslow; Alexandra Castagné; François Sasha Luccioni; Matthias Yvon; Jonathan Gallé; Alexander...
[ { "formula_coordinates": [ 2, 70.87, 155.9, 441.11, 146.95 ], "formula_id": "formula_0", "formula_text": "< Threshold, Continue! Complement Complement … … … … … Beam 𝒊 + 𝟏 Figure 1:" }, { "formula_coordinates": [ 4, 284.4, 287.25, 143.5...
ALLIES: Prompting Large Language Model with Beam Search
With the advance of large language models (LLMs), the research field of LLM applications has become more and more popular, and the idea of constructing pipelines that accomplish complex tasks by stacking LLM API calls has come true. However, such methods face two limitations: narrow information coverage and low fault tol...
Hao Sun; Xiao Liu; Yeyun Gong; Yan Zhang; Daxin Jiang; Linjun Yang; Nan Duan
[ { "figure_caption": "Figure 2 :2Figure 2: The abstract process of ALLIES.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Performance comparison w.r.t. hyperparameters on NQ dataset.", "figure_data":...
[{"Category": "Methodological Basis", "Citation": "[Brown et al., 2020]", "Explanation": "The cited work by Brown et al. introduces the in-context learning method, which the citing paper adopts to generate responses and answer queries using large language models."}, {"Category": "Extension or Continuation", "Citation":...
[ { "figure_ref": [ "fig_1" ], "heading": "Introduction", "publication_ref": [ "b44", "b25", "b13", "b30", "b12", "b26", "b55", "b25", "b37", "b7", "b28", "b23", "b20", "b22", "b21", "b32", "b4"...
2023-05-24
[ { "authors": "Alaaeldin Ali; Hugo Touvron; Mathilde Caron; Piotr Bojanowski; Matthijs Douze; Armand Joulin; Ivan Laptev; Natalia Neverova; Gabriel Synnaeve; Jakob Verbeek", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b0", "title": "Xcit: Cross-covariance image transfor...
[ { "formula_coordinates": [ 4, 88.61, 166.31, 406.27, 188.29 ], "formula_id": "formula_0", "formula_text": "H × W × 3 ! \" × # \" × D ! ! $ × # $ × 𝐷 \" ! %& × # %& × 𝐷 # ! '( × # '( × 𝐷 $ Dual Attention Block Multi-Head Partition Attention 𝐻 % × 𝑊 % × 𝐷 % 𝑘×𝑘 DW...
Dual Path Transformer with Partition Attention
This paper introduces a novel attention mechanism, called dual attention, which is both efficient and effective. The dual attention mechanism consists of two parallel components: local attention generated by Convolutional Neural Networks (CNNs) and long-range attention generated by Vision Transformers (ViTs). To addres...
Zhengkai Jiang; Liang Liu; Jiangning Zhang; Yabiao Wang; Mingang Chen; Chengjie Wang
[ { "figure_caption": "Figure 2 .2Figure 2. Parameters vs. ImageNet Accuracy. DualFormers outperform state-of-the-art Vision Transformers while having fewer parameters and FLOPs. The model names, T, XS, S, and B, denote tiny, extra-small, small, and base, respectively.", "figure_data": "", "figure_id": "f...
[{"Category": "Methodological Basis", "Citation": "[26]", "Explanation": "The cited work on Transformers provides the foundational basis for the use of Vision Transformers (ViTs) in architecture design for computer vision tasks in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanatio...
[ { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Introduction", "publication_ref": [ "b3", "b52", "b54", "b53", "b40", "b12", "b6", "b32", "b11", "b42", "b23", "b38", "b44", "b2", "b35", "b38...
2023-11-14
10.18653/v1/N19-1300
[ { "authors": "Meysam Alizadeh; Fabrizio Gilardi; Emma Hoes; Jonathan Klüser; Mael Kubli; Nahema Marchal", "journal": "Journal of Quantitative Description: Digital Media", "ref_id": "b0", "title": "Content moderation as a political issue: The twitter discourse around trump's ban", "year": "2022" ...
[ { "formula_coordinates": [ 6, 119.43, 572.75, 76.24, 14.27 ], "formula_id": "formula_0", "formula_text": "s ′ ki = f (r ki , e ki )." }, { "formula_coordinates": [ 7, 104.71, 583.18, 151.77, 30.56 ], "formula_id": "formula_1...
Using Natural Language Explanations to Rescale Human Judgments
The rise of large language models (LLMs) has brought a critical need for high-quality humanlabeled data, particularly for processes like human feedback and evaluation. A common practice is to label data via consensus annotation over crowdworker judgments. However, annotators' judgments for subjective tasks can differ i...
Manya Wadhwa; Jifan Chen; Junyi Jessy Li; Greg Durrett
[ { "figure_caption": "Figure 1 :1Figure1: Overview of our method. By feeding explanations that annotators write into an LLM, we can rescale their coarse-grained judgment to a 100-point scale.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "fig...
[{"Category": "Supporting Evidence", "Citation": "(Brown et al., 2020)", "Explanation": "The cited work by Brown et al. (2020) highlights the shift towards more subjective tasks in the field of language models, which is a key focus of the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(Ouyang et al.,...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b17", "b21", "b19", "b0", "b15", "b42", "b15", "b5", "b45", "b35", "b46", "b29", "b44", "b8", "b47", "b35", "b15" ], "table_r...
10.5281/zenodo.5297715
[ { "authors": "Jacob Austin; Daniel D Johnson; Jonathan Ho; Daniel Tarlow; Rianne Van Den; Berg", "journal": "", "ref_id": "b0", "title": "Structured denoising diffusion models in discrete state-spaces", "year": "2021" }, { "authors": "Nikita Balagansky; Daniil Gavrilov", "journal": ...
[ { "formula_coordinates": [ 2, 328.22, 166.06, 176.08, 32.7 ], "formula_id": "formula_0", "formula_text": "0 = logits-initialization(w c:c+B ) wc:c+B t = √ ᾱt wc:c+B 0 + √ 1 -ᾱt ϵ" }, { "formula_coordinates": [ 2, 317.57, 359.6, 195.41, ...
David helps Goliath: Inference-Time Collaboration Between Small Specialized and Large General Diffusion LMs
Diffusion-based language models are emerging as a promising alternative to autoregressive LMs: they approach the competence of autoregressive LMs while offering nuanced controllability at inference time. While autoregressive LMs have benefited immensely from scaling and instruction-based learning, existing studies of d...
Xiaochuang Han; Sachin Kumar; Yulia Tsvetkov; Marjan Ghazvininejad
[ { "figure_caption": "Figure 2 :2Figure 2: Training and decoding algorithms for SSD-2. The training algorithm describes the training objective at an arbitrary context length c. The decoding algorithm can be applied multiple rounds by appending the generation from one round to the context for the next. The decodi...
[{"Category": "Methodological Basis", "Citation": "(Ho et al., 2020)", "Explanation": "The cited work by Ho et al. (2020) serves as a methodological basis for diffusion-based generative models in continuously valued data such as images, audio, and video, which the citing paper extends to discrete text data."}, {"Catego...
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b0", "b23", "b36", "b1", "b28", "b10", "b9", "b50", "b6", "b31", "b6" ], "table_ref": [], "text": "Tools to support research activities of...
2023-12-01
10.1145/3589955
[ { "authors": "Tal August; Lucy Lu Wang; Jonathan Bragg; Marti A Hearst; Andrew Head; Kyle Lo", "journal": "ACM Trans. Comput.-Hum. Interact. Just Accepted", "ref_id": "b0", "title": "Paper plain: Making medical research papers approachable to healthcare consumers with natural language processing", ...
[]
A Question Answering Framework for Decontextualizing User-facing Snippets from Scientific Documents
Many real-world applications (e.g., note taking, search) require extracting a sentence or paragraph from a document and showing that snippet to a human outside of the source document. Yet, users may find snippets difficult to understand as they lack context from the original document. In this work, we use language mode...
Benjamin Newman; Luca Soldaini; Raymond Fok; Arman Cohan; Kyle Lo
[ { "figure_caption": "Figure 1 :1Figure 1: Illustration of two user-facing scenarios requiring snippet decontextualization. (Top) A citation graph explorer surfacing citation context snippets to explain relationships between papers. (Bottom) An AI research assistant providing snippets as attributions. Highlighte...
[{"Category": "Supporting Evidence", "Citation": "(August et al., 2023)", "Explanation": "The cited work by August et al. provides a specific example of how snippets can be used to help readers understand technical documents more efficiently."}, {"Category": "Supporting Evidence", "Citation": "(Fok et al., 2023b)", "Ex...
[ { "figure_ref": [ "fig_0" ], "heading": "INTRODUCTION", "publication_ref": [ "b24", "b56", "b9", "b45", "b62", "b35", "b40", "b11", "b26", "b35", "b26", "b46", "b43", "b37", "b73", "b31", "b30...
2023-10-09
10.1609/aaai.v35i3.16370
[ { "authors": "Davide Abati; Jakub Tomczak; Tijmen Blankevoort; Simone Calderara; Rita Cucchiara; Babak Ehteshami Bejnordi", "journal": "", "ref_id": "b0", "title": "Conditional channel gated networks for task-aware continual learning", "year": "2020" }, { "authors": "Akiko Aizawa", "...
[ { "formula_coordinates": [ 4, 372.67, 154.97, 97.65, 9.65 ], "formula_id": "formula_0", "formula_text": "Q [θ ∈ R α (Q)] ≥ 1 -α." }, { "formula_coordinates": [ 4, 213.38, 572.21, 48.03, 14.11 ], "formula_id": "formula_1", ...
IBCL: ZERO-SHOT MODEL GENERATION FOR TASK TRADE-OFFS IN CONTINUAL LEARNING
Like generic multi-task learning, continual learning has the nature of multi-objective optimization, and therefore faces a trade-off between the performance of different tasks. That is, to optimize for the current task distribution, it may need to compromise performance on some previous tasks. This means that there exis...
Pengyuan Lu; Michele Caprio; Eric Eaton; Insup Lee
[ { "figure_caption": "Figure 1 :1Figure 1: The IBCL workflow. The orange polytopes are the geometric representations of FGCSs.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Top to bottom: Results on Cel...
[{"Category": "Methodological Basis", "Citation": "(Kendall et al., 2018)", "Explanation": "The cited work by Kendall et al. provides the foundational theory of multi-task learning, which the citing paper builds upon to address the multi-objective optimization problem in the context of lifelong or continual learning."}...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b16", "b3", "b12", "b23", "b7", "b28", "b30", "b14", "b8", "b28", "b24", "b17" ], "table_ref": [], "text": "Chinese Spelling Correction (CSC) is a task...
2023-05-24
10.18653/v1/2020.acl-main.81
[ { "authors": "Xingyi Cheng; Weidi Xu; Kunlong Chen; Shaohua Jiang; Feng Wang; Taifeng Wang; Wei Chu; Yuan Qi", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "SpellGCN: Incorporating phonological and visual similarities into language models for Chinese spelling chec...
[ { "formula_coordinates": [ 4, 84, 539.22, 205.74, 21.01 ], "formula_id": "formula_0", "formula_text": "si = xi, 1 ≤i ≤ n initi-n, finali-n, n + 1 ≤i ≤ n + n .(1)" }, { "formula_coordinates": [ 4, 320.05, 467.18, 180.9, 51.73 ], ...
Disentangled Phonetic Representation for Chinese Spelling Correction
Chinese Spelling Correction (CSC) aims to detect and correct erroneous characters in Chinese texts. Although efforts have been made to introduce phonetic information (Hanyu Pinyin) in this task, they typically merge phonetic representations with character representations, which tends to weaken the representation effect...
Zihong Liang; Xiaojun Quan; Qifan Wang
[ { "figure_caption": "Figure 1 :1Figure 1: The architecture of the proposed DORM, which consists of a phonetics-aware input sequence S, an encoder with separation mask, a pinyin-to-character objective, and a self-distillation module. X is the original input sentence, R is the pinyin sequence of X, Y is the corre...
[{"Category": "Methodological Basis", "Citation": "(Martins and Silva, 2004)", "Explanation": "The cited work by Martins and Silva (2004) provides a foundational method for Chinese spelling correction (CSC) that is adopted in the citing paper to address the task of detecting and correcting erroneous characters in Chine...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b7", "b11", "b23", "b14", "b27", "b26", "b28", "b29", "b30", "b33", "b13", "b16", "b20", "b21", "b3", "b5", "b8", "b15", ...
2023-05-24
[ { "authors": "A Gary; Edwin R Atkinson; Hancock", "journal": "IEEE Trans. Image Process", "ref_id": "b0", "title": "Recovery of surface orientation from diffuse polarization", "year": "2006" }, { "authors": "Yunhao Ba; Alex Gilbert; Franklin Wang; Jinfa Yang; Rui Chen; Yiqin Wang; Lei Y...
[ { "formula_coordinates": [ 4, 140.39, 285.8, 242.56, 22.54 ], "formula_id": "formula_0", "formula_text": "I = (P 0 + P 45 + P 90 + E 135 ) 2(1)" }, { "formula_coordinates": [ 6, 84.43, 283.28, 298.52, 20.75 ], "formula_id": ...
Polarimetric Imaging for Perception
Autonomous driving and advanced driver-assistance systems rely on a set of sensors and algorithms to perform the appropriate actions and provide alerts as a function of the driving scene. Typically, the sensors include color cameras, radar, lidar and ultrasonic sensors. Strikingly, however, although light polarization i...
Michael Baltaxe; Tomer Pe'er; Dan Levi
[ { "figure_caption": "Figure 1 :1Figure 1: Examples of collected data. Each row shows a different sample with RGB (left), AoLP (middle left), DoLP (middle right) and lidar projected on RGB (right). The cyclic color map in the AoLP images goes from red for 0 • to magenta for 179 • . In the DoLP images black corre...
[{"Category": "Methodological Basis", "Citation": "[8,12,24]", "Explanation": "The cited works provide a basis for free space detection in ADAS and autonomous vehicles, which the citing paper builds upon to plan and act appropriately on the road."}, {"Category": "Methodological Basis", "Citation": "[15,28]", "Explanati...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b50", "b6", "b49", "b27", "b7", "b55", "b49", "b52", "b43", "b42" ], "table_ref": [], "text": "Transformer-based (Vaswani et al., 2017) language models (LMs) have ...
2023-11-04
10.18653/v1/2021.deelio-1.3
[ { "authors": "Joshua Ainslie; Tao Lei; Santiago Michiel De Jong; Siddhartha Ontañón; Yury Brahma; David Zemlyanskiy; Mandy Uthus; James Guo; Yi Lee-Thorp; Yun-Hsuan Tay; Sumit Sung; Sanghai", "journal": "", "ref_id": "b0", "title": "CoLT5: Faster long-range transformers with conditional computation...
[ { "formula_coordinates": [ 3, 311.01, 542.83, 208.54, 33.96 ], "formula_id": "formula_0", "formula_text": "L = - 1 N n i=1 m i t=1 log p(x i t | x i 1 , . . . , x i t-1 , σ <i )." }, { "formula_coordinates": [ 5, 102.42, 73.15, 409.75, ...
Adapting Language Models to Compress Contexts
Transformer-based language models (LMs) are powerful and widely-applicable tools, but their usefulness is constrained by a finite context window and the expensive computational cost of processing long text documents. We propose to adapt pre-trained LMs into AutoCompressors. These language models are capable of compress...
Alexis Chevalier; Alexander Wettig; Anirudh Ajith; Danqi Chen
[ { "figure_caption": "Figure 2 :2Figure2: Perplexity on 2048 held-out tokens given different numbers of compressed tokens. Compression is performed on up to 3 segments of 2048 tokens. Ablations show that the different components of our finetuning strategy help boost performance and that stopgradients do not impa...
[{"Category": "Methodological Basis", "Citation": "(Vaswani et al., 2017)", "Explanation": "The cited work introduces the concept of language models, which serves as the basis for the research conducted in the citing paper on teaching pre-trained LMs to compress text into summary vectors."}, {"Category": "Supporting Ev...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b29", "b12", "b24", "b50", "b16", "b38" ], "table_ref": [], "text": "Counterfactual generation, designed to eliminate spurious correlations in data, is a crucial technique used in causal ...
2024-02-23
10.18653/v1/D15-1075
[ { "authors": "", "journal": "Bibliographical References", "ref_id": "b0", "title": "", "year": "" }, { "authors": "R Samuel; Gabor Bowman; Christopher Angeli; Christopher D Potts; Manning", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "A l...
[ { "formula_coordinates": [ 3, 86.28, 380.15, 204.59, 25.14 ], "formula_id": "formula_0", "formula_text": "P (Y |X) = 1 ||XC ∪ XT || x i ∈(X C ∪X T ) P (Y |xi),(1)" }, { "formula_coordinates": [ 3, 102.37, 417.78, 52.24, 9.65 ], ...
Prompting Large Language Models for Counterfactual Generation: An Empirical Study
Large language models (LLMs) have made remarkable progress in a wide range of natural language understanding and generation tasks. However, their ability to generate counterfactuals has not been examined systematically. To bridge this gap, we present a comprehensive evaluation framework on various types of NLU tasks, w...
Yongqi Li; Mayi Xu; Xin Miao; Shen Zhou; Tieyun Qian
[ { "figure_caption": "Figure 1 :1Figure 1: (a) Structural causal model of the SA task, (b) Intervention operation.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Left: The proposed framework for evaluati...
[{"Category": "Methodological Basis", "Citation": "(Pearl, 1993)", "Explanation": "The cited work by Pearl (1993) introduces the concept of causal intervention, which is a crucial technique for counterfactual generation. The citing paper adopts this technique to enhance the robustness and performance of neural network ...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b5", "b29", "b37", "b29", "b11", "b14", "b14" ], "table_ref": [], "text": "A wealth of information exists in the form of structured knowledge, such as movie information databases or...
2023-07-11
10.1162/coli.07-034-R2
[ { "authors": "Oshin Agarwal; Mihir Kale; Heming Ge; Siamak Shakeri; Rami Al-Rfou", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Machine translation aided bilingual data-to-text generation and semantic parsing", "year": "2020" }, { "authors": "Ron Arts...
[ { "formula_coordinates": [ 4, 91.77, 421.03, 177.74, 16.59 ], "formula_id": "formula_0", "formula_text": "L d ′ = -1 |d| |d| i=0 log p(d i |d 0 , ..., d i-1 , t)" }, { "formula_coordinates": [ 4, 94.81, 555.7, 171.66, 16.59 ], ...
Faithful Low-Resource Data-to-Text Generation through Cycle Training
Methods to generate text from structured data have advanced significantly in recent years, primarily due to fine-tuning of pre-trained language models on large datasets. However, such models can fail to produce output faithful to the input data, particularly on out-of-domain data. Sufficient annotated data is often not...
Zhuoer Wang; Marcus Collins; Nikhita Vedula; Simone Filice; Shervin Malmasi; Oleg Rokhlenko
[ { "figure_caption": "Figure 1 :1Figure 1: Cycle Training of the Data-to-Text model and Text-to-Data model. For each cycle, the upper-level models are frozen to generate the intermediate text for the training of the lower-level models, that attempt to reconstruct the initial inputs (d, t denote initial inputs of...
[{"Category": "Data Source", "Citation": "WebNLG (Castro Ferreira et al., 2020)", "Explanation": "The cited work is a public dataset that the citing paper uses to tackle the data-to-text generation task for a variety of purposes."}, {"Category": "Data Source", "Citation": "ToTTo (Parikh et al., 2020)", "Explanation": "...
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b25", "b35", "b4", "b20" ], "table_ref": [], "text": "As large language models (LLMs) are deployed widely, the need to keep their knowledge correct and up-to-date without massive r...
[ { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "", "ref_id": "b0", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Wei-Li...
[ { "formula_coordinates": [ 3, 306.14, 567.16, 220.08, 40 ], "formula_id": "formula_0", "formula_text": "C = ⟨(s 1 , r 1 , o 1 ), . . . , (s n , r n , o n )⟩, we randomly sam- ple t ∈ {1, . . . , N } counterfactual edits in C." }, { "formula_coordinates": [ ...
MQUAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions
The information stored in large language models (LLMs) falls out of date quickly, and retraining from scratch is often not an option. This has recently given rise to a range of techniques for injecting new facts through updating model weights. Current evaluation paradigms are extremely limited, mainly validating the re...
Zexuan Zhong; Zhengxuan Wu; Christopher D Manning; Christopher Potts; Danqi Chen
[ { "figure_caption": "Figure 1 :1Figure 1: An example of our benchmark MQUAKE. Existing knowledge-editing methods often perform well at answering paraphrased questions of the edited fact but fail on multi-hop questions that are entailed consequences of the edited fact.", "figure_data": "", "figure_id": "...
[{"Category": "Supporting Evidence", "Citation": "(Sinitsin et al., 2020)", "Explanation": "The cited work highlights the importance of keeping language models up-to-date and correct, which is a key focus of the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Zhu et al., 2021;De Cao et al., 202...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b4", "b5", "b6", "b13", "b14", "b15" ], "table_ref": [], "text": "In contemporary times, the Language Model (LM) [1; 2] has emerged as a pivotal player in the field of Natural Language Pr...
2024-01-23
[ { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton ; Lee Kristina; Toutanova ", "journal": "", "ref_id": "b0", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbia...
[ { "formula_coordinates": [ 4, 108, 713.17, 66.91, 9.68 ], "formula_id": "formula_1", "formula_text": "C 1 , ..., C N ∈ S." } ]
Exploring Diverse In-Context Configurations for Image Captioning
After discovering that Language Models (LMs) can be good in-context few-shot learners, numerous strategies have been proposed to optimize in-context sequence configurations. Recently, researchers in Vision-Language (VL) domains have also developed their few-shot learners, but they only use the simplest way, i.e., randomly ...
Xu Yang; Yongliang Wu; Mingzhuo Yang; Haokun Chen; Xin Geng
[ { "figure_caption": "Figure 2 :2Figure 2: Image selection strategies: (a) SIIR-CLIP, (b) SIIR-TAG, (c) DIIR-TT, (d) SICR-CLIP. 3 Configuring In-Context Sequences The in-context captioning can be treated as a vision-language conditioned text generation task. Given the multi-modal in-context sequence S = {(I 1 , ...
[{"Category": "Methodological Basis", "Citation": "[1; 2]", "Explanation": "The cited works are the Language Model (LM) that have emerged as a pivotal player in the field of Natural Language Processing (NLP), providing a unified approach to a range of diverse tasks by using a shared prompt paradigm."}, {"Category": "Ex...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0" ], "table_ref": [], "text": "In-context learning (ICL) with large language models (LLMs) has shown great potential in performing a wide range of language tasks (Brown et al., 2020). ICL has the unique advantages o...
2023-10-26
[ { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell; Sandhini Agarwal; Ariel Herbert-Voss; Gretchen Krueger; Tom Henighan; Rewon Child; Aditya Ramesh; Daniel Ziegler; Jeffrey Wu; Clemens Winter; Ch...
[ { "formula_coordinates": [ 3, 86.8, 368.05, 32.73, 16 ], "formula_id": "formula_0", "formula_text": "(1) i , y(1)" }, { "formula_coordinates": [ 3, 158.99, 368.05, 40.86, 16 ], "formula_id": "formula_1", "formula_text": ...
Estimating Large Language Model Capabilities without Labeled Test Data
Large Language Models (LLMs) have the impressive ability to perform in-context learning (ICL) from only a few examples, but the success of ICL varies widely from task to task. Thus, it is important to quickly determine whether ICL is applicable to a new task, but directly evaluating ICL accuracy can be expensive in sit...
Harvey Yiyun Fu; Qinyuan Ye; Albert Xu; Xiang Ren; Robin Jia
[ { "figure_caption": "Figure 2 :2Figure2: Bar graph of evaluation results (MAE) for all meta-models, baseline methods, and Oracle baselines of all 3 dataset collections with all 4 LLMs. We use the confidence vector as the meta-feature. Red/blue bars represent the meta-model/baseline evaluation results and the ho...
[{"Category": "Supporting Evidence", "Citation": "(Brown et al., 2020)", "Explanation": "The cited work by Brown et al. (2020) provides foundational evidence of the potential of in-context learning (ICL) with large language models (LLMs) in performing a wide range of language tasks, which supports the claims and hypoth...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b16", "b27", "b5", "b10", "b7", "b0", "b25", "b23", "b36", "b24", "b10", "b40", "b3", "b5", "b13", "b23", "b18", "b38", "...
10.18653/v1/2022.acl-long.589
[ { "authors": "Iz Beltagy; Matthew E Peters; Arman Cohan", "journal": "", "ref_id": "b0", "title": "Longformer: The long-document transformer", "year": "2020" }, { "authors": "Amanda Bertsch; Uri Alon; Graham Neubig; Matthew R Gormley", "journal": "", "ref_id": "b1", "title": ...
[ { "formula_coordinates": [ 2, 315.64, 75.72, 199.27, 71.08 ], "formula_id": "formula_0", "formula_text": "Approach In→Out Enc Enc←Dec Dec Efficient Attention x → y ■ ■ Extract-Abstract xe → y □ ⋆ Dynamic Weight x → y □ ■ + ⋆ Divide-Conquer xi → yi □ □" }, { "for...
AWESOME: GPU Memory-constrained Long Document Summarization using Memory Mechanism and Global Salient Content
Long document summarization systems are critical for domains with lengthy and jargon-laden text, yet they present significant challenges to researchers and developers with limited computing resources. Existing solutions mainly focus on efficient attentions or divide-and-conquer strategies. The former reduces theoretical ...
Shuyang Cao; Lu Wang
[ { "figure_caption": "Model", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Running time (batch per second) of each model. A higher number of batches processed per second indicates a faster running speed. ...
[{"Category": "Methodological Basis", "Citation": "(Lewis et al., 2020)", "Explanation": "The cited work by Lewis et al. provides the performance benchmarks for large pre-trained transformer models in abstractive summarization, which the citing paper uses to evaluate the performance of their own model."}, {"Category": ...
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b16", "b17", "b2", "b10" ], "table_ref": [], "text": "In search engines, the queries issued by users are mostly broad and vague [15,16]. This problem is extremely crucial for product search scenarios...
2023-05-24
10.1145/3539618.3591900
[ { "authors": "Qingyao Ai; Daniel N Hill; S V N Vishwanathan; W Bruce Croft", "journal": "ACM", "ref_id": "b0", "title": "A Zero Attention Model for Personalized Product Search", "year": "2019-11-03" }, { "authors": "Qingyao Ai; Lakshmi Narayanan; Ramasamy ", "journal": "ACM", "re...
[ { "formula_coordinates": [ 5, 371.15, 416, 187.59, 21.99 ], "formula_id": "formula_0", "formula_text": "IE(𝑞) = ∑︁ 𝑝 ∈ I (𝑞) -𝑃 (𝑝 |𝑞) log 2 𝑃 (𝑝 |𝑞)(1)" }, { "formula_coordinates": [ 5, 366.14, 443.22, 189.43, 21.78 ],...
JDsearch: A Personalized Product Search Dataset with Real Queries and Full Interactions
Recently, personalized product search attracts great attention and many models have been proposed. To evaluate the effectiveness of these models, previous studies mainly utilize the simulated Amazon recommendation dataset, which contains automatically generated queries and excludes cold users and tail products. We argu...
Jiongnan Liu; Zhicheng Dou; Guoyu Tang; Sulong Xu
[ { "figure_caption": "4 https://github.com/rucliujn/JDsearch", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: The log-log distribution of product's interaction frequency", "figure_data": "", "figure...
[{"Category": "Data Source", "Citation": "[1-6, 9, 11]", "Explanation": "The cited works are the most widely-used personalized product search datasets based on real user behaviors, which serve as the foundational data for the research conducted in the citing paper on personalized product search."}, {"Category": "Data S...
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b40", "b50", "b38", "b66", "b5", "b57", "b42", "b90", "b62", "b63", "b0", "b73", "b81", ...
2023-05-24
[ { "authors": "Eric Arazo", "journal": "", "ref_id": "b0", "title": "Pseudo-labeling and confirmation bias in deep semi-supervised learning", "year": "2020" }, { "authors": "Philip Bachman; Ouais Alsharif; Doina Precup", "journal": "NeurIPS", "ref_id": "b1", "title": "Learning...
[ { "formula_coordinates": [ 5, 42.11, 408.78, 238.07, 31.61 ], "formula_id": "formula_0", "formula_text": "ℓ labeled cls = (x,y * )∼D l K k=1 Cls(y * , p k (y|x, b k-1 )),(1)" }, { "formula_coordinates": [ 5, 42.11, 446.09, 233.83, 3...
Semi-Supervised and Long-Tailed Object Detection with CascadeMatch
This paper focuses on long-tailed object detection in the semi-supervised learning setting, which poses realistic challenges, but has rarely been studied in the literature. We propose a novel pseudo-labeling-based detector called CascadeMatch. Our detector features a cascade network architecture, which has multi-stage d...
Yuhang Zang; Kaiyang Zhou; Chen Huang; Chen Change Loy
[ { "figure_caption": "Fig. 1 :1Fig. 1: Motivation of our research. (a) The Average Precision (AP) and Average Recall (AR) curves, obtained using different fixed confidence thresholds (denoted by τ ). Clearly, none of the chosen thresholds gives the best trade-off. (b) The distribution of prediction scores for a ...
[{"Category": "Supporting Evidence", "Citation": "[40]", "Explanation": "The cited work, the COCO dataset, is used as a benchmark for evaluating the performance of semi-supervised object detectors in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[16]", "Explanation": "The cited work, LVIS, is a ...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b26", "b30", "b11", "b49", "b3", "b50", "b8" ], "table_ref": [], "text": "A core challenge in deploying NLP systems lies in managing temporal misalignment, where a model that is tra...
10.18653/v1/2020.emnlp-main.550
[ { "authors": "Stephen Anastasios N Angelopoulos; Adam Bates; Lihua Fisch; Tal Lei; Schuster", "journal": "", "ref_id": "b0", "title": "Conformal risk control", "year": "2022" }, { "authors": "Hessam Bagherinezhad; Hannaneh Hajishirzi; Yejin Choi; Ali Farhadi", "journal": "", "re...
[]
Mitigating Temporal Misalignment by Discarding Outdated Facts
While large language models are able to retain vast amounts of world knowledge seen during pretraining, such knowledge is prone to going out of date and is nontrivial to update. Furthermore, these models are often used under temporal misalignment, tasked with answering questions about the present, despite having only b...
Michael J Q Zhang; Eunsol Choi
[ { "figure_caption": "FactFigure 1 :1Figure1: We depict the critical timestamps at play in open-retrieval QA systems. In the example on the left, the temporal misalignment between when the system was trained and evaluated has no affect on the answer. On the right, the answer has changed, causing the system to ou...
[{"Category": "Methodological Basis", "Citation": "(Lazaridou et al., 2021)", "Explanation": "The cited work highlights the issue of temporal misalignment in deploying NLP systems, which the citing paper addresses by proposing methods to manage the misalignment."}, {"Category": "Supporting Evidence", "Citation": "(Luu ...
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b10", "b1", "b18", "b26", "b11", "b14", "b22", "b6", "b3", "b17", "b25", "b6", "b17", "b22", "b10", "b3", "b17", ...
2023-11-14
10.18653/v1/2020.nlp4convai-1.5
[ { "authors": "", "journal": "L-BERT TAPT", "ref_id": "b0", "title": "", "year": "" }, { "authors": "References Pavel Burnyshev; A Bout; Anfrey Bout; Valentin Malykh; Irina Piontkovskaya", "journal": "", "ref_id": "b1", "title": "Infobert: Zeroshot approach to natural language...
[ { "formula_coordinates": [ 2, 360.78, 345.38, 164.36, 30.47 ], "formula_id": "formula_0", "formula_text": "c n = 1 K x n,i ∈Sn f ϕ (x n,i )(1)" }, { "formula_coordinates": [ 2, 313.41, 625.12, 211.73, 30.47 ], "formula_id": ...
Pre-training Intent-Aware Encoders for Zero- and Few-Shot Intent Classification
Intent classification (IC) plays an important role in task-oriented dialogue systems. However, IC models often generalize poorly when training without sufficient annotated examples for each user intent. We propose a novel pre-training method for text encoders that uses contrastive learning with intent pseudo-labels to p...
Mujeen Sung; James Gung; Elman Mansimov; Nikolaos Pappas; Raphael Shu; Salvatore Romeo; Yi Zhang; Vittorio Castelli
[ { "figure_caption": "Figure 1 :1Figure 1: Overview of pre-training the intent-aware encoder (PIE). Given an utterance x 1 from pre-training corpus, we generate a pseudo intent name y pseudo", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figu...
[{"Category": "Methodological Basis", "Citation": "(Liu et al., 2019a)", "Explanation": "The cited work by Liu et al. provides a method for few-shot text classification that the citing paper adopts to tackle the challenge of data collection and re-training models in task-oriented dialogue systems."}, {"Category": "Meth...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b10", "b7", "b15", "b17", "b14", "b12", "b0", "b7", "b15", "b17", "b14", "b12", "b0", "b17", "b5", "b7" ], "table_ref": [], "te...
[ { "authors": "Anpei Chen; Zexiang Xu; Andreas Geiger; Jingyi Yu; Hao Su", "journal": "", "ref_id": "b0", "title": "Tensorf: Tensorial radiance fields", "year": "2022" }, { "authors": "Anpei Chen; Zexiang Xu; Fuqiang Zhao; Xiaoshuai Zhang; Fanbo Xiang; Jingyi Yu; Hao Su", "journal": "...
[ { "formula_coordinates": [ 3, 183.78, 556.48, 320.89, 30.32 ], "formula_id": "formula_0", "formula_text": "Ĉ(r) = N -1 i=0 T i (1 -exp(-σ i δ i ))c i , T i = exp(- i-1 j=0 σ j δ j ).(1)" }, { "formula_coordinates": [ 3, 108, 624.92, 96.94...
OD-NeRF: Efficient Training of On-the-Fly Dynamic Neural Radiance Fields
Dynamic neural radiance fields (dynamic NeRFs) have demonstrated impressive results in novel view synthesis on 3D dynamic scenes. However, they often require complete video sequences for training followed by novel view synthesis, which is similar to playing back the recording of a dynamic 3D scene. In contrast, we prop...
Zhiwen Yan; Chen Li; Gim Hee Lee
[ { "figure_caption": "Figure 1 :1Figure 1: We introduce the on-the-fly training(left) of dynamic NeRFs and the OD-NeRF model(right). In on-the-fly training, the dynamic NeRF is trained based on the current and previous training frames to synthesize novel views for the current time step. Our OD-NeRF leverages the...
[{"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work introduces the concept of neural radiance fields (NeRF), which serves as the basis for the research conducted in the citing paper to develop a new 3D volumetric representation capable of synthesizing photo-realistic novel views fro...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b4", "b22", "b18", "b14", "b36", "b5", "b13", "b8", "b17" ], "table_ref": [], "text": "Document summarization aims to compress text material while retaining its most sal...
2023-10-09
10.18653/v1/D19-1435
[ { "authors": "Afra Feyza Akyürek; Ekin Akyürek; Aman Madaan; Ashwin Kalyan; Peter Clark; Derry Wijaya; Niket Tandon", "journal": "", "ref_id": "b0", "title": "Rl4f: Generating natural language feedback with reinforcement learning for repairing model outputs", "year": "2023" }, { "authors...
[ { "formula_coordinates": [ 4, 103.17, 243.19, 186.69, 33.58 ], "formula_id": "formula_0", "formula_text": "p S (y 0 | x) = m t=1 p S y 0 t | y 0 <t , x ,(1)" }, { "formula_coordinates": [ 4, 306.14, 244.68, 62.6, 10.72 ], "f...
SummIt: Iterative Text Summarization via ChatGPT
Text summarization systems have made significant progress in recent years, but typically generate summaries in a single step. However, the one-shot summarization setting is sometimes inadequate, as the generated summary may contain hallucinations or overlook essential details related to the reader's interests. This p...
Haopeng Zhang; Xiao Liu; Jiawei Zhang
[ { "figure_caption": "Figure 1 :1Figure 1: An illustration of the iterative summarization process. The summarizer continuously refines the summary according to self-feedback from the evaluator at each iteration.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figur...
[{"Category": "Methodological Basis", "Citation": "(Cheng and Lapata, 2016)", "Explanation": "The cited work by Cheng and Lapata (2016) provides a methodological basis for the development of end-to-end summarization systems by introducing the use of neural networks and pre-trained language models in summarization syste...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b25", "b4", "b1", "b10", "b15", "b2", "b3", "b20", "b14", "b36", "b0", "b37" ], "table_ref": [], "text": "Autonomous driving is a rapidly developing fi...
[ { "authors": "P Anderson; X He; C Buehler; D Teney; M Johnson; S Gould; L Zhang", "journal": "", "ref_id": "b0", "title": "Bottom-up and top-down attention for image captioning and visual question answering", "year": "2018" }, { "authors": "S Antol; A Agrawal; J Lu; M Mitchell; D Batra; ...
[ { "formula_coordinates": [ 3, 352.97, 376.64, 205.03, 23.23 ], "formula_id": "formula_0", "formula_text": "θ = cos -1 (B 1 [: 2] -B 2 [: 2]) • V ego [: 2] ∥B 1 [: 2] -B 2 [: 2]∥∥V ego [: 2]∥ ,(1)" }, { "formula_coordinates": [ 3, 420.54, 418.37...
NuScenes-QA: A Multi-Modal Visual Question Answering Benchmark for Autonomous Driving Scenario
We introduce a novel visual question answering (VQA) task in the context of autonomous driving, aiming to answer natural language questions based on street-view clues. Compared to traditional VQA tasks, VQA in autonomous driving scenario presents more challenges. Firstly, the raw visual data are multi-modal, including ...
Tianwen Qian; Jingjing Chen; Linhai Zhuo; Yang Jiao; Yu-Gang Jiang
[ { "figure_caption": "Figure 2 :2Figure 2: Data construction flow of NuScenes-QA. First, the scene graphs are generated using the annotated object labels and 3D bounding boxes. Then, we design question templates manually, and instantiate the question-answer pairs with them. Finally, the generated data are filter...
[{"Category": "Data Source", "Citation": "(Liu et al. 2023)", "Explanation": "The cited work by Liu et al. is a source of data for the autonomous driving system, providing information on 3D object detection that is used in the system."}, {"Category": "Data Source", "Citation": "(Jiao et al. 2023b)", "Explanation": "The...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b31", "b8", "b4", "b18", "b32", "b34", "b33", "b30", "b7", "b3", "b36", "b5", "b3", "b15", "b3", "b26", "b14", "b37", "b2...
2023-05-24
[ { "authors": "", "journal": "FLOPs ↓ Throughput ↑ ViT-B", "ref_id": "b0", "title": "Model Efficiency Top1 Acc.(%) ↑ Resolution #Params", "year": "" }, { "authors": "", "journal": "", "ref_id": "b1", "title": "Comparison with baselines. Except for E-ViT, which undergoes traini...
[ { "formula_coordinates": [ 4, 101.83, 425.68, 184.54, 11.72 ], "formula_id": "formula_0", "formula_text": "X = x i ∈ R d |i = 1, 2. . . , N ,(1)" }, { "formula_coordinates": [ 4, 90.03, 489.24, 196.34, 9.65 ], "formula_id": ...
Predicting Token Impact Towards Efficient Vision Transformer
Token filtering to reduce irrelevant tokens prior to self-attention is a straightforward way to enable efficient vision Transformers. This is the first work to view token filtering from a feature selection perspective, where we weigh the importance of a token according to how much it can change the loss once masked. If t...
Hong Wang; Su Yang; Xiaoke Huang; Weishan Zhang
[ { "figure_caption": "Figure 1 .1Figure 1. The overall training process: The two branches of the vision Transformer are in fact the same one, whose parameters are fixed during training. The delta loss and ρ refer to Eq. (7) and Eq. (8).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", ...
[{"Category": "Methodological Basis", "Citation": "[23]", "Explanation": "The cited work, Swin Transformer, is a method that the citing paper adopts to reduce the computational load in vision Transformer models by confining self-attention to a local neighborhood."}, {"Category": "Methodological Basis", "Citation": "[33...
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b32", "b40", "b28", "b38", "b1", "b15", "b29", "b39", "b16", "b0", "b13", "b5", "b30", "b5", "b17", "b17" ], "ta...
2023-10-24
10.18653/v1/D17-1303
[ { "authors": "Emanuele Bugliarello; Fangyu Liu; Jonas Pfeiffer; Siva Reddy; Desmond Elliott; Edoardo ; Maria Ponti; Ivan Vulić", "journal": "", "ref_id": "b0", "title": "Iglue: A benchmark for transfer learning across modalities, tasks, and languages", "year": "2022" }, { "authors": "Yen...
[ { "formula_coordinates": [ 3, 306.14, 500.51, 83.33, 10.69 ], "formula_id": "formula_0", "formula_text": "T = {T 1 , ..., T N }," }, { "formula_coordinates": [ 3, 386.27, 606.41, 138.87, 33.18 ], "formula_id": "formula_1", ...
Meta-learning For Vision-and-language Cross-lingual Transfer
Current pre-trained vision-language models (PVLMs) achieve excellent performance on a range of multi-modal datasets. Recent work aims at building multilingual versions of such models, and a range of multilingual multimodal datasets have been introduced for this purpose. However, current PVLMs typically perform poorly o...
Hanxu Hu; Frank Keller
[ { "figure_caption": "Figure 1 :1Figure 1: Examples in IGLUE (Bugliarello et al., 2022) benchmark. The left example comes from MaRVL (Liu et al., 2021) dataset, and the right example comes from XVNLI dataset proposed in IGLUE.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figur...
[{"Category": "Methodological Basis", "Citation": "(Chen et al., 2020)", "Explanation": "The cited work by Chen et al. provides a pre-trained vision-language model that the citing paper adopts in their research to perform multi-modal tasks."}, {"Category": "Data Source", "Citation": "(Lu et al., 2019)", "Explanation": ...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b4", "b5", "b33", "b16", "b28", "b27", "b16", "b26" ], "table_ref": [], "text": "Predicting and modeling sequences of events has become more sophisticated over the past decade...
2023-05-24
10.18653/v1/P19-1470
[ { "authors": "Antoine Bosselut; Hannah Rashkin; Maarten Sap; Chaitanya Malaviya; Asli Celikyilmaz; Yejin Choi", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "COMET: Commonsense transformers for automatic knowledge graph construction", "year": "2019" }, { ...
[ { "formula_coordinates": [ 3, 323.24, 620.91, 201.9, 21.12 ], "formula_id": "formula_0", "formula_text": "r(s, Ŝ) = max ŝ∈ Ŝ (max(E(s, ŝ), E(ŝ, s)))(1)" }, { "formula_coordinates": [ 5, 334.93, 699.92, 50.39, 9.57 ], "formul...
Drafting Event Schemas using Language Models
Past work has studied event prediction and event language modeling, sometimes mediated through structured representations of knowledge in the form of event schemas. Such schemas can lead to explainable predictions and forecasting of unseen events given incomplete information. In this work, we look at the process of cre...
Anisha Gunjal; Greg Durrett
[ { "figure_caption": "\"List causes and events that can happen over the course of a [d]? Causes of a [d]: 1.\" Causes and temporally-aided prompt \"List causes and events that can happen before, during and after a [d]? Causes of a [d]: 1.\" One-Shot Prompts Figure 2 depicts a sample of the one-shot prompts we us...
[{"Category": "Methodological Basis", "Citation": "(Chambers and Jurafsky, 2008)", "Explanation": "The cited work by Chambers and Jurafsky in 2008 introduced the concept of mining narrative schemas using predicate-role pairs, which has been adopted in later works for event prediction and cloze tasks."}, {"Category": "M...
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b4", "b5", "b6", "b7", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b6", "b17", "b18", "b19", "...
2023-05-24
[ { "authors": "M Wang; W Deng", "journal": "Neurocomputing", "ref_id": "b0", "title": "Deep Face Recognition: A Survey", "year": "2021" }, { "authors": "P Li; L Prieto; D Mery; P J Flynn", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b1", "t...
[ { "formula_coordinates": [ 2, 311.98, 617, 71.2, 10.31 ], "formula_id": "formula_0", "formula_text": "Q * = H(M (I))" }, { "formula_coordinates": [ 3, 169.29, 490.68, 119.19, 13.38 ], "formula_id": "formula_1", "formula_...
Optimization-Based Improvement of Face Image Quality Assessment Techniques
Contemporary face recognition (FR) models achieve near-ideal recognition performance in constrained settings, yet do not fully translate the performance to unconstrained (real-world) scenarios. To help improve the performance and stability of FR systems in such unconstrained settings, face image quality assessment (FIQA...
Žiga Babnik; Naser Damer; Vitomir Štruc
[ { "figure_caption": "Fig. 1 .1Fig.1. Overview of the proposed method that consists of: Label Optimization and Transfer Learning. The label-optimization step incorporates information extracted from mated image pairs into quality scores precomputed with an existing FIQA technique. The transfer-learning step is th...
[{"Category": "Supporting Evidence", "Citation": "[1], [2]", "Explanation": "The cited works highlight the challenges of out-of-distribution data in real-world scenarios, which motivates the need for face image quality assessment techniques in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[3]- ...
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b12", "b34", "b24", "b11", "b54", "b42", "b42", "b43" ], "table_ref": [], "text": "Recent advances in NLP primarily focus on t...
10.1162/tacl_a_00416
[ { "authors": "David Ifeoluwa Adelani; Jade Abbott; Graham Neubig; D' Daniel; Julia Souza; Constantine Kreutzer; Chester Lignos; Happy Palen-Michel; Shruti Buzaaba; Sebastian Rijhwani; Stephen Ruder; Israel Mayhew; Shamsuddeen H Abebe Azime; Chris Chinenye Muhammad; Joyce Emezue; Perez Nakatumba-Nabende; Aremu O...
[ { "formula_coordinates": [ 2, 315.93, 91.73, 189.85, 58.29 ], "formula_id": "formula_0", "formula_text": "XTREME ✓ XTREME-R ✓ XGLUE ✓ ✓ CrossFit ✓ ✓ MEGA* ✓ ✓ BUFFET ✓ ✓ ✓ ✓" }, { "formula_coordinates": [ 19, 214.94, 452.98, 226.77, ...
BUFFET: Benchmarking Large Language Models for Few-shot Cross-lingual Transfer
Despite remarkable advancements in few-shot generalization in natural language processing, most models are developed and evaluated primarily in English. To facilitate research on few-shot cross-lingual transfer, we introduce a new benchmark, called BUFFET, which unifies 15 diverse tasks across 54 languages in a sequenc...
Akari Asai; Sneha Kudugunta; Xinyan Velocity Yu; Terra Blevins; Hila Gonen; Machel Reid; Yulia Tsvetkov; Sebastian Ruder; Hannaneh Hajishirzi
[ { "figure_caption": "Figure 1 :1Figure 1: BUFFET includes unified diverse tasks in the same format, covering many typologically diverse languages. It enables a fair comparison across models, transfer methods, and languages and facilitates largescale analysis across different learning setups.", "figure_data"...
[{"Category": "Supporting Evidence", "Citation": "(Blasi et al., 2022)", "Explanation": "The cited work by Blasi et al. (2022) highlights the focus of recent advances in NLP on the English language, which serves as a foundational point for the citing paper to build upon."}, {"Category": "Supporting Evidence", "Citation...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b37", "b17", "b32", "b4", "b61", "b42", "b54", "b44", "b57", "b31" ], "table_ref": [], "text": "Large language models (LLMs) have evolved considerably in size, arc...
2023-11-19
10.18653/v1/2021.emnlp-main.98
[ { "authors": "Jimmy Ba; Jamie Ryan Kiros; Geoffrey E Hinton", "journal": "", "ref_id": "b0", "title": "Layer normalization", "year": "2016" }, { "authors": "Yoshua Bengio; Réjean Ducharme; Pascal Vincent; Christian Janvin", "journal": "J. Mach. Learn. Res", "ref_id": "b1", "t...
[ { "formula_coordinates": [ 3, 105.69, 611.93, 184.17, 33.71 ], "formula_id": "formula_0", "formula_text": "p(X) = N i=1 p (x i |x 1 , x 2 , ..., x i-1 )(1)" }, { "formula_coordinates": [ 3, 310.71, 97.68, 214.43, 44.01 ], "f...
How To Train Your (Compressed) Large Language Model
With the increase in the size of large language models (LLMs), we need compression methods that can reduce the model size while preserving the generality and zero-shot promptability of the model. This goal is more ambitious than the typical compression setup, which reduces the model's size at the expense of specializin...
Ananya Harsh Jha; Tom Sherborne; Evan Pete Walsh; Dirk Groeneveld; Emma Strubell; Iz Beltagy
[ { "figure_caption": "Figure 1 :1Figure 1: Truncated initialization configurations for layer pruning in a decoder-only language model. Highlighted layers (green) are removed. Our method and distillation baselines remove half of the layers according to each configuration. We retain the first and last layer as the...
[{"Category": "Methodological Basis", "Citation": "(Brown et al., 2020)", "Explanation": "The cited work by Brown et al. (2020) introduces the GPT-3 model, which is a key methodological basis for the citing paper in terms of the development of large language models and the use of in-context learning for task-specific f...
[ { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Introduction", "publication_ref": [ "b15", "b12", "b42", "b44", "b41", "b87", "b75", "b7", "b65", "b37", "b27", "b30", "b42", "b4", "b34", "b2...
10.48550/arXiv.2305.19068
[ { "authors": "Jiaxin Bai; Xin Liu; Weiqi Wang; Chen Luo; Yangqiu Song", "journal": "", "ref_id": "b0", "title": "Complex query answering on eventuality knowledge graph with implicit logical constraints", "year": "2023" }, { "authors": "Pratyay Banerjee; Chitta Baral", "journal": "Ass...
[ { "formula_coordinates": [ 3, 306.14, 387.35, 168.84, 9.57 ], "formula_id": "formula_0", "formula_text": "D = {(h, r, t)|h ∈ H, r ∈ R, t ∈ T }" }, { "formula_coordinates": [ 3, 306.14, 439.6, 218.27, 25.07 ], "formula_id": "...
CAR: Conceptualization-Augmented Reasoner for Zero-Shot Commonsense Question Answering
The task of zero-shot commonsense question answering evaluates models on their capacity to reason about general scenarios beyond those presented in specific datasets. Existing approaches for tackling this task leverage external knowledge from CommonSense Knowledge Bases (CSKBs) by pre-training the model on synthetic QA...
Weiqi Wang; Tianqing Fang; Wenxuan Ding; Baixuan Xu; Xin Liu; Yangqiu Song; Antoine Bosselut
[ { "figure_caption": "Figure 1 :1Figure 1: An example of constructing synthetic QA pairs from CSKB(Ma et al., 2021). The simple heuristic used in this process can result in false negative options.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { ...
[{"Category": "Supporting Evidence", "Citation": "(Ma et al., 2021)", "Explanation": "The cited work by Ma et al. provides evidence that pre-trained language models struggle to generalize to distributionally different examples from their training sets, which supports the claim made in the citing paper about the limitat...
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b46", "b66", "b35", "b33", "b37", "b61", "b54", "b4", "b45", "b44", "b43", "b17", "b9" ], "table_ref": [], "text": "Text clust...
2023-11-03
10.18653/v1/2020.acl-main.372
[ { "authors": "Wenbin An; Feng Tian; Qinghua Zheng; Wei Ding; Qianying Wang; Ping Chen", "journal": "", "ref_id": "b0", "title": "Generalized category discovery with decoupled prototypical network", "year": "2022" }, { "authors": "Yuntao Bai; Saurav Kadavath; Sandipan Kundu; Amanda Askell...
[ { "formula_coordinates": [ 3, 144.66, 439.08, 145.21, 10.69 ], "formula_id": "formula_0", "formula_text": "c j = P T (I T , t),(1)" }, { "formula_coordinates": [ 3, 70.87, 622.74, 109.26, 14 ], "formula_id": "formula_1", ...
CLUSTERLLM: Large Language Models as a Guide for Text Clustering
We introduce CLUSTERLLM, a novel text clustering framework that leverages feedback from an instruction-tuned large language model, such as ChatGPT. Compared with traditional unsupervised methods that build upon "small" embedders, CLUSTERLLM exhibits two intriguing advantages: (1) it enjoys the emergent capability of L...
Yuwei Zhang; Zihan Wang; Jingbo Shang
[ { "figure_caption": "Figure 1 :1Figure 1: LLMs like ChatGPT are not applicable for text clustering directly because of the inaccessible embeddings. CLUSTERLLM resolves the dilemma by leveraging LLM as a guide on text clustering.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "fi...
[{"Category": "Methodological Basis", "Citation": "(MacQueen, 1967)", "Explanation": "The cited work by MacQueen (1967) introduces a clustering algorithm that is adopted in the citing paper to perform text clustering on pre-trained embedders."}, {"Category": "Data Source", "Citation": "(Muennighoff et al., 2022)", "Exp...
[ { "figure_ref": [ "fig_1", "fig_1", "fig_1" ], "heading": "Introduction", "publication_ref": [ "b32", "b43", "b10" ], "table_ref": [], "text": "Large Language Models (LLMs) have demonstrated remarkable performance in solving various natural language pr...
[ { "authors": "H Stephen; Victor Bach; Zheng-Xin Sanh; Albert Yong; Colin Webson; Raffel; V Nihal; Abheesht Nayak; Taewoon Sharma; M Kim; Thibault Saiful Bari; Zaid Fevry; Manan Alyafeai; Andrea Dey; Zhiqing Santilli; Srulik Sun; Canwen Ben-David; Gunjan Xu; Han Chhablani; Jason Wang; Alan Fries; Maged S Al-Sha...
[ { "formula_coordinates": [ 3, 96.96, 175.77, 390.88, 54.56 ], "formula_id": "formula_0", "formula_text": "Log-probability Mean ZLP s(x, y) = 1 |T | t log p(y|x, t) Probability Mean ZPM s(x, y) = 1 |T | t p(y|x, t) Majority Vote ZMV s(x, y) = t 1{arg max v∈Y p(y|x, t) = ...
Improving Probability-based Prompt Selection Through Unified Evaluation and Analysis
Previous works in prompt engineering for large language models have introduced different gradient-free probability-based prompt selection methods that aim to choose the optimal prompt among the candidates for a given task but have failed to provide a comprehensive and fair comparison between each other. In this paper, ...
Sohee Yang; Jonghyeon Kim; Joel Jang; Seonghyeon Ye; Hyunji Lee; Minjoon Seo
[ { "figure_caption": "Figure 1 :1Figure 1: (a) F1 of the prompts selected by different probability-based prompt selection methods, averaged across 13 datasets. Per-dataset F1 and accuracy are shown in Figure 9. The methods without super/subscripts are the existing methods (Table1), while those with super/subscri...
[{"Category": "Methodological Basis", "Citation": "(Sorensen et al., 2022)", "Explanation": "The cited work provides the foundational concept of Mutual Information (MI) that the citing paper utilizes in the development of prompt selection methods for large language models (LLMs). The discovery in the cited work is used...
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b8", "b13", "b14", "b15", "...
2023-05-24
[ { "authors": "J Liu; F Guo; Y Zhang; B Hou; H Zhou", "journal": "Applied Intelligence", "ref_id": "b0", "title": "Defect classification on limited labeled samples with multiscale feature fusion and semi-supervised learning", "year": "2021" }, { "authors": "J Wu; J Le; Z Xiao; F Zhang; L ...
[ { "formula_coordinates": [ 8, 56.88, 173.63, 331.5, 25.84 ], "formula_id": "formula_0", "formula_text": "F G (y k ) = F 1 G (y k ) , F 2 G (y k ) , ..., F L G (y k ) and F T (y k ) = F 1 T (y k ) , F 2 T (y k ) , ." }, { "formula_coordinates": [ 8, 5...
Multiresolution Feature Guidance Based Transformer for Anomaly Detection
Anomaly detection is formulated as an unsupervised learning task that identifies images deviating from normal images. In general, anomaly detection tasks face two main challenges, i.e., the class imbalance and the unexpectedness of anomalies. In this paper, we propose a multiresolution feature guidance method based on T...
Shuting Yan; Pingping Chen; Honghui Chen; Huan Mao; Feng Chen; Zhijian Lin
[ { "figure_caption": "Fig. 11Fig. 1 Visual results from the MVTec AD datasets. Superimposed on the images are the anomaly localization map from GTrans. Red areas correspond to the located anomalies, whereas the blue areas indicate the normality regions.", "figure_data": "", "figure_id": "fig_0", "fig...
[{"Category": "Supporting Evidence", "Citation": "[1][2][3]", "Explanation": "The cited works provide evidence of the use of industrial inspection techniques in anomaly detection, which supports the claim that anomaly detection techniques have been extensively studied in various research and application domains."}, {"C...
[ { "figure_ref": [ "fig_0" ], "heading": "", "publication_ref": [ "b0", "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b5", "b7", "b8", "b8", "b9", "b10", "b0", "b11", "b8", "b9", "b12"...
2023-05-24
[ { "authors": "L Wu; X He; X Wang; K Zhang; M Wang", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b0", "title": "A survey on accuracy-oriented neural recommendation: From collaborative filtering to information-rich recommendation", "year": "2022" }, { "auth...
[ { "formula_coordinates": [ 3, 310.7, 161.7, 13.05, 12.7 ], "formula_id": "formula_0", "formula_text": "C (l) p" }, { "formula_coordinates": [ 3, 310.48, 169.76, 202.57, 58.99 ], "formula_id": "formula_1", "formula_text":...
How Graph Convolutions Amplify Popularity Bias for Recommendation?
Graph convolutional networks (GCNs) have become prevalent in recommender systems (RS) due to their superiority in modeling collaborative patterns. Although improving the overall accuracy, GCNs unfortunately amplify popularity bias: tail items are less likely to be recommended. This effect prevents the GCN-based RS from ...
Jiajia Chen; Jiancan Wu; Jiawei Chen; Xin Xin; Yong Li; Xiangnan He
[ { "figure_caption": "Fig. 11Fig. 1 Performance change of LightGCN with different graph convolution layers on Gowalla. Recall@20 and TR@20 stand for the overall recall score and the ratio of tail items in the top-20 recommendation list, respectively.", "figure_data": "", "figure_id": "fig_0", "figure...
[{"Category": "Methodological Basis", "Citation": "[6]", "Explanation": "The cited work LightGCN is the backbone model used in the citing paper, and the BPR loss is employed as the loss function in the GCN-based recommender system."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited wor...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b26", "b37", "b61", "b6", "b26", "b3", "b43", "b53", "b35", "b16", "b54", "b56", "b8", "b40", "b55", "b40", "b5", "b42", ...
2023-05-25
10.1109/WACV.2019.00137
[ { "authors": "Abien Fred; Agarap ", "journal": "", "ref_id": "b0", "title": "Deep learning using rectified linear units (relu)", "year": "2018" }, { "authors": "Anthreas Antoniou; Amos Storkey; Harrison Edwards", "journal": "", "ref_id": "b1", "title": "Data Augmentation Gene...
[ { "formula_coordinates": [ 4, 253.36, 262.9, 251.3, 10.7 ], "formula_id": "formula_0", "formula_text": "L s↔ t = D [f s (x), f t (x)] .(1)" }, { "formula_coordinates": [ 4, 253.41, 314.13, 251.26, 9.65 ], "formula_id": "form...
HARD: Hard Augmentations for Robust Distillation
Knowledge distillation (KD) is a simple and successful method to transfer knowledge from a teacher to a student model solely based on functional activity. However, current KD has a few shortcomings: it has recently been shown that this method is unsuitable for transferring simple inductive biases like shift equivariance, st...
Arne F Nix; Max F Burg; Fabian H Sinz
[ { "figure_caption": "HFigure 2 :2Figure2: Fitting the student, a three-layer ReLU MLP, to the teacher function, cos(x), for 10, 000 iterations. We show results for 10 random seeds (A-D) and the distribution of (augmented) training inputs as a normalized histogram (E-H). We compare baseline (no augmentations) wi...
[{"Category": "Supporting Evidence", "Citation": "[27,37,60]", "Explanation": "The cited works provide a foundation for the use of knowledge distillation methods in transfer learning scenarios, including model compression, continual learning, and neuroscience research."}, {"Category": "Extension or Continuation", "Cita...
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b5", "b11", "b8", "b19", "b6", "b22", "b26", "b3", "b28", "b33", "b11", "b8", "b7", "b20", "b17", "b32", "b16", ...
10.18653/v1/D19-1651
[ { "authors": "Sangnie Bhardwaj; Samarth Aggarwal; Mausam Mausam", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "CaRB: A crowdsourced benchmark for open IE", "year": "2019" }, { "authors": "Sangnie Bhardwaj; Samarth Aggarwal; Mausam Mausam", "journa...
[]
PIVOINE: Instruction Tuning for Open-world Information Extraction
We consider the problem of Open-world Information Extraction (Open-world IE), which extracts comprehensive entity profiles from unstructured texts. Different from the conventional closed-world setting of Information Extraction (IE), Open-world IE considers a more general situation where entities and relations could be ...
Keming Lu; Xiaoman Pan; Kaiqiang Song; Hongming Zhang; Dong Yu; Jianshu Chen
[ { "figure_caption": "Figure 1 :1Figure 1: Illustration of Open-world IE and its two main challenges: generalization to unseen instructions and out-of-ontology entities.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "LLM\"Pa...
[{"Category": "Methodological Basis", "Citation": "(Grishman, 2015)", "Explanation": "The cited work by Grishman (2015) provides a foundational understanding of information extraction (IE) and its various tasks, which the citing paper uses to structure its own research on IE."}, {"Category": "Supporting Evidence", "Cit...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b31", "b25", "b30", "b21", "b7", "b28", "b26", "b34", "b19", "b33" ], "table_ref": [], "text": "Large language models (LLMs) are becoming mainstream and easily acc...
2024-03-10
10.18653/v1/2021.acl-long.565
[ { "authors": "Elizabeth Clark; Tal August; Sofia Serrano; Nikita Haduong; Suchin Gururangan; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "All that's 'human' is not gold: Evaluating human evaluation of generated text", "year": "2021" }, { ...
[]
M4: Multi-Generator, Multi-Domain, and Multi-Lingual Black-Box Machine-Generated Text Detection
Large language models (LLMs) have demonstrated remarkable capability to generate fluent responses to a wide variety of user queries. However, this has also raised concerns about the potential misuse of such texts in journalism, education, and academia. In this study, we strive to create automated systems that can detec...
Yuxia Wang; Jonibek Mansurov; Petar Ivanov; Jinyan Su; Artem Shelmanov; Akim Tsvigun; Chenxi Whitehouse; Osama Mohammed Afzal; Tarek Mahmoud; Toru Sasaki; Thomas Arnold; Alham Fikri Aji; Nizar Habash; Iryna Gurevych; Preslav Nakov
[ { "figure_caption": "Figure 1 :1Figure 1: Accuracy of cross-domain experiments: given generations from ChatGPT (top) or davinci (bottom), train on a single domain and test across domains across five detectors. (see more detail in Tables12 and 13)", "figure_data": "", "figure_id": "fig_0", "figure_la...
[{"Category": "Data Source", "Citation": "(Mitchell et al., 2023)", "Explanation": "The cited work provides a benchmark dataset for evaluating the performance of human and machine-generated text classification models."}, {"Category": "Data Source", "Citation": "(Wang et al., 2024)", "Explanation": "The cited work serve...
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b1", "b21", "b3", "b21", "b31", "b16", "b25", "b16", "b29", "b16", "b25" ], "table_ref": [], "text": "Large language models (LLMs) (Devlin...
2023-11-06
10.18653/v1/N19-1423
[ { "authors": "Deborah A Dahl; Madeleine Bates; Michael Brown; William Fisher; Kate Hunicke-Smith; David Pallett; Christine Pao; Alexander Rudnicky; Elizabeth Shriberg", "journal": "", "ref_id": "b0", "title": "Expanding the scope of the ATIS task: The ATIS-3 corpus", "year": "1994-03-08" }, ...
[ { "formula_coordinates": [ 2, 313.81, 763.54, 211.33, 10.84 ], "formula_id": "formula_0", "formula_text": "y test ∼ P LM (• | x 1 , y 1 , . . . , x K , y K , x test ) (1)" }, { "formula_coordinates": [ 3, 105.52, 175.83, 130.66, 15....
Coverage-based Example Selection for In-Context Learning
In-context learning (ICL), the ability of large language models to perform novel tasks by conditioning on a prompt with a few task examples, requires these examples to be informative about the test instance. The standard approach of independently ranking and selecting the most similar examples selects redundant example...
Shivanshu Gupta; Matt Gardner; Sameer Singh
[ { "figure_caption": "Figure 1 :1Figure 1: (a) Test input with salient aspects highlighted. (a) Independently selecting similar examples leads to redundancy and failure to demonstrate all salient aspects, in this case, the need to identify the manager. (b) Coverage-based selection using SET-BSR mitigates this by...
[{"Category": "Methodological Basis", "Citation": "(Devlin et al., 2019)", "Explanation": "The cited work by Devlin et al. introduces the concept of large language models (LLMs), which serves as the basis for the research conducted in the citing paper on in-context learning (ICL). The study builds upon the capabilities...
[ { "figure_ref": [ "fig_1" ], "heading": "Introduction", "publication_ref": [ "b11", "b40", "b6", "b27", "b22", "b26", "b31", "b30", "b39", "b14", "b2", "b14", "b19", "b34", "b3", "b34", "b19",...
2023-05-24
10.18653/v1/2021.naacl-main.44
[ { "authors": "Rami Aly; Christos Christodoulopoulos; Oana Cocarascu; Zhijiang ", "journal": "", "ref_id": "b0", "title": "Proceedings of the Fourth Workshop on Fact Extraction and VERification (FEVER). Association for Computational Linguistics", "year": "2021" }, { "authors": "Raviteja A...
[ { "formula_coordinates": [ 2, 354.02, 731.04, 171.12, 11.62 ], "formula_id": "formula_0", "formula_text": "Attr (y,A) = avg s∈y Attr (s,A)(2)" }, { "formula_coordinates": [ 3, 98.17, 149.07, 191.7, 24.67 ], "formula_id": "fo...
PURR: Efficiently Editing Language Model Hallucinations by Denoising Language Model Corruptions
The remarkable capabilities of large language models have been accompanied by a persistent drawback: the generation of false and unsubstantiated claims commonly known as "hallucinations". To combat this issue, recent research has introduced approaches that involve editing and attributing the outputs of language models,...
Anthony Chen; Panupong Pasupat; Sameer Singh; Hongrae Lee; Kelvin Guu
[ { "figure_caption": "Training PURR. Given a seed query, we search for relevant evidence and summarize them into a claim which we corrupt. PURR is trained to denoise the corruption conditioned on the evidence.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" ...
[{"Category": "Supporting Evidence", "Citation": "(Bai et al., 2022)", "Explanation": "The cited work by Bai et al. provides a method for few-shot prompting of language models to perform editing tasks, which the citing paper uses to highlight the advantages of posthoc methods in text editing."}, {"Category": "Supportin...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b18", "b19", "b59", "b41", "b55", "b23", "b54", "b52", "b53", "b30", "b32", "b47", "b54", "b12", "b17", "b21", "b46", "b36", ...
2023-11-02
[ { "authors": "Anthony Michael Ahn; Noah Brohan; Yevgen Brown; Omar Chebotar; Byron Cortes; Chelsea David; Keerthana Finn; Karol Gopalakrishnan; Alex Hausman; Herzog", "journal": "", "ref_id": "b0", "title": "Do as i can, not as i say: Grounding language in robotic affordances", "year": "2022" ...
[ { "formula_coordinates": [ 20, 123.27, 219.32, 7.31, 100.88 ], "formula_id": "formula_0", "formula_text": "→ → → → → → → → → → → → →" }, { "formula_coordinates": [ 36, 123.17, 386.54, 148.16, 22.44 ], "formula_id": "formula_...
Leveraging Pre-trained Large Language Models to Construct and Utilize World Models for Model-based Task Planning
There is a growing interest in applying pre-trained large language models (LLMs) to planning problems. However, methods that use LLMs directly as planners are currently impractical due to several factors, including limited correctness of plans, strong reliance on feedback from interactions with simulators or even the a...
Lin Guan; Karthik Valmeekam; Sarath Sreedharan; Subbarao Kambhampati
[ { "figure_caption": "Figure 1 :1Figure 1: An overview of our framework and existing methods that use LLMs directly as planners.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The prompt template for PDD...
[{"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work by [19] has showcased remarkable performance in natural language processing tasks, which the citing paper leverages to improve the planning capabilities of LLMs."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanati...
[ { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Introduction", "publication_ref": [ "b0", "b22", "b33", "b3", "b8", "b18", "b26", "b31", "b32", "b11", "b13", "b10", "b12", "b13", "b10", "b12...
[ { "authors": "A Anandkumar; R Ge; D Hsu; S M Kakade; M Telgarsky", "journal": "The Journal of Machine Learning Research", "ref_id": "b0", "title": "Tensor decompositions for learning latent variable models", "year": "2014" }, { "authors": "H Attouch; J Bolte; P Redont; A Soubeyran", ...
[ { "formula_coordinates": [ 3, 187.8, 680.53, 236.41, 20.09 ], "formula_id": "formula_0", "formula_text": "Z(i 1 , i 3 , j 3 ) = I2 i2=1 I4 i4=1 X (i 1 , i 2 , i 3 , i 4 )Y(i 2 , i 4 , j 3 )." }, { "formula_coordinates": [ 4, 97.2, 219.04, ...
SVDinsTN: A Tensor Network Paradigm for Efficient Structure Search from Regularized Modeling Perspective
Tensor network (TN) representation is a powerful technique for computer vision and machine learning. TN structure search (TN-SS) aims to search for a customized structure to achieve a compact representation, which is a challenging NP-hard problem. Recent "sampling-evaluation"-based methods require sampling an extensive...
Yu-Bang Zheng; Xi-Le Zhao; Junhua Zeng; Chao Li; Qibin Zhao; Heng-Chao Li; Ting-Zhu Huang
[ { "figure_caption": "Figure 1 :1Figure 1: (a) A graphical illustration of SVD. (b) A graphical illustration of SVD-inspired TN decomposition on a fifth-order tensor. (c) Comparison of the compression ratio (↓) and run time (↓) of different methods on a fifth-order light field image Knights, where the reconstruc...
[{"Category": "Methodological Basis", "Citation": "[12,14]", "Explanation": "The cited works provide a theoretical foundation for the challenging and NP-hard problem of TN structure search, which the citing paper aims to address in its research on the same topic."}, {"Category": "Methodological Basis", "Citation": "[13...
[ { "figure_ref": [ "fig_1", "fig_2" ], "heading": "Introduction", "publication_ref": [ "b29", "b32", "b26", "b23", "b37", "b51", "b34", "b20", "b39", "b39", "b34", "b20", "b28", "b54", "b52" ], ...
2023-05-24
10.18653/v1/2020.emnlp-main.618
[ { "authors": "", "journal": "Wu and Dredze", "ref_id": "b0", "title": "mBERT based methods", "year": "2019" }, { "authors": "( Advce; Keung", "journal": "", "ref_id": "b1", "title": "", "year": "2019" }, { "authors": " Tsl (wu", "journal": "", "ref_id": "...
[ { "formula_coordinates": [ 3, 70.87, 358.11, 454.27, 417.51 ], "formula_id": "formula_0", "formula_text": "den representations h = {h i } L i=1 of each sentence x = {x i } L i=1 ∈ D: h = F(x).(1)" }, { "formula_coordinates": [ 3, 306.14, 503.58...
CoLaDa: A Collaborative Label Denoising Framework for Cross-lingual Named Entity Recognition
Cross-lingual named entity recognition (NER) aims to train an NER system that generalizes well to a target language by leveraging labeled data in a given source language. Previous work alleviates the data scarcity problem by translating source-language labeled data or performing knowledge distillation on target-languag...
Tingting Ma; Qianhui Wu; Huiqiang Jiang; Börje F Karlsson; Tiejun Zhao; Chin-Yew Lin
[ { "figure_caption": "Figure 1 :1Figure 1: Comparison between previous methods (a/b/c) and our CoLaDa at the i-th iteration (d) CoLaDa starts at M 0 tgt and performs denoising iteratively. D src : Sourcelanguage labeled data. D trans : Translation data. D tgt : Target-language unlabeled data with pseudo-labels g...
[{"Category": "Methodological Basis", "Citation": "(Wu et al., 2020a)", "Explanation": "The cited work, UniTrans, is utilized in the citing paper to finetune a weak model for annotation of unlabeled target-language data, providing a methodological basis for the research conducted in the citing paper."}, {"Category": "M...
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b44", "b19", "b28", "b38", "b26", "b34", "b25", "b37", "b21", "b33", "b6", "b16", "b18", "b27", "b12" ], "table_ref": [ "tab_0" ...
2023-05-24
10.21227/c6tm-vw12
[ { "authors": "N Audebert; B Le Saux; S Lefèvre", "journal": "", "ref_id": "b0", "title": "Joint learning from earth observation and openstreetmap data to get faster better semantic maps", "year": "2017" }, { "authors": "N Audebert; B Le Saux; S Lefèvre", "journal": "ISPRS journal of ...
[ { "formula_coordinates": [ 4, 314.61, 281.48, 203.37, 64.99 ], "formula_id": "formula_0", "formula_text": ".111止--------一------ 1111111■■ 一------一-一-------1111 •• _____-一----- 600 400 200 0 .11111111■■---一一一一一一一-一- 1111111■■■-一一---- 111111■■-----------一-" }, { "...
GAMUS: A Geometry-aware Multi-modal Semantic Segmentation Benchmark for Remote Sensing Data
Geometric information in normalized digital surface models (nDSM) is highly correlated with the semantic class of the land cover. Jointly exploiting the two modalities (RGB and nDSM height) has great potential to improve segmentation performance. However, this remains an under-explored field in remote sensing due ...
Zhitong Xiong; Sining Chen; Yi Wang; Lichao Mou; Xiao Xiang Zhu
[ { "figure_caption": "Fig. 1 :1Fig. 1: Example images of the GAMUS dataset. Images from left to right are the RGB modality, the nDSM modality, the blending visualization image, and the segmentation label.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" },...
[{"Category": "Methodological Basis", "Citation": "(Zhu et al., 2017)", "Explanation": "The cited work by Zhu et al. (2017) is referenced to highlight the importance of semantic segmentation in the fields of computer vision and remote sensing, providing a foundational basis for the citing paper to build upon."}, {"Cate...