arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,305.07507 | LeXFiles and LegalLAMA: Facilitating English Multinational Legal
Language Model Development | ['Ilias Chalkidis', 'Nicolas Garneau', 'Catalina Goanta', 'Daniel Martin Katz', 'Anders Søgaard'] | ['cs.CL'] | In this work, we conduct a detailed analysis on the performance of
legal-oriented pre-trained language models (PLMs). We examine the interplay
between their original objective, acquired knowledge, and legal language
understanding capacities which we define as the upstream, probing, and
downstream performance, respectiv... | 2023-05-12T14:21:38Z | 9 pages, long paper at ACL 2023 proceedings | null | null | LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development | ['Ilias Chalkidis', 'Nicolas Garneau', 'Catalina Goanta', 'D. Katz', 'Anders Søgaard'] | 2,023 | Annual Meeting of the Association for Computational Linguistics | 64 | 57 | ['Computer Science'] |
2,305.07759 | TinyStories: How Small Can Language Models Be and Still Speak Coherent
English? | ['Ronen Eldan', 'Yuanzhi Li'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Language models (LMs) are powerful tools for natural language processing, but
they often struggle to produce coherent and fluent text when they are small.
Models with around 125M parameters such as GPT-Neo (small) or GPT-2 (small) can
rarely generate coherent and consistent English text beyond a few words even
after ex... | 2023-05-12T20:56:48Z | null | null | null | null | null | null | null | null | null | null |
2,305.07922 | CodeT5+: Open Code Large Language Models for Code Understanding and
Generation | ['Yue Wang', 'Hung Le', 'Akhilesh Deepak Gotmare', 'Nghi D. Q. Bui', 'Junnan Li', 'Steven C. H. Hoi'] | ['cs.CL', 'cs.LG', 'cs.PL'] | Large language models (LLMs) pretrained on vast source code have achieved
prominent progress in code intelligence. However, existing code LLMs have two
main limitations in terms of architecture and pretraining tasks. First, they
often adopt a specific architecture (encoder-only or decoder-only) or rely on a
unified enc... | 2023-05-13T14:23:07Z | 26 pages, preprint | null | null | CodeT5+: Open Code Large Language Models for Code Understanding and Generation | ['Yue Wang', 'Hung Le', 'Akhilesh Deepak Gotmare', 'Nghi D. Q. Bui', 'Junnan Li', 'Steven C. H. Hoi'] | 2,023 | Conference on Empirical Methods in Natural Language Processing | 504 | 71 | ['Computer Science'] |
2,305.08227 | DeepFilterNet: Perceptually Motivated Real-Time Speech Enhancement | ['Hendrik Schröter', 'Tobias Rosenkranz', 'Alberto N. Escalante-B.', 'Andreas Maier'] | ['eess.AS', 'cs.CL', 'cs.SD'] | Multi-frame algorithms for single-channel speech enhancement are able to take
advantage from short-time correlations within the speech signal. Deep Filtering
(DF) was proposed to directly estimate a complex filter in frequency domain to
take advantage of these correlations. In this work, we present a real-time
speech e... | 2023-05-14T19:09:35Z | Accepted as show and tell demo to interspeech 2023 | null | null | DeepFilterNet: Perceptually Motivated Real-Time Speech Enhancement | ['Hendrik Schröter', 'T. Rosenkranz', 'Alberto N. Escalante', 'Andreas Maier'] | 2,023 | Interspeech | 20 | 11 | ['Engineering', 'Computer Science'] |
2,305.08322 | C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for
Foundation Models | ['Yuzhen Huang', 'Yuzhuo Bai', 'Zhihao Zhu', 'Junlei Zhang', 'Jinghan Zhang', 'Tangjun Su', 'Junteng Liu', 'Chuancheng Lv', 'Yikai Zhang', 'Jiayi Lei', 'Yao Fu', 'Maosong Sun', 'Junxian He'] | ['cs.CL'] | New NLP benchmarks are urgently needed to align with the rapid development of
large language models (LLMs). We present C-Eval, the first comprehensive
Chinese evaluation suite designed to assess advanced knowledge and reasoning
abilities of foundation models in a Chinese context. C-Eval comprises
multiple-choice questi... | 2023-05-15T03:20:19Z | NeurIPS 2023. Website: https://cevalbenchmark.com | null | null | null | null | null | null | null | null | null |
2,305.08455 | Document Understanding Dataset and Evaluation (DUDE) | ['Jordy Van Landeghem', 'Rubén Tito', 'Łukasz Borchmann', 'Michał Pietruszka', 'Paweł Józiak', 'Rafał Powalski', 'Dawid Jurkiewicz', 'Mickaël Coustaty', 'Bertrand Ackaert', 'Ernest Valveny', 'Matthew Blaschko', 'Sien Moens', 'Tomasz Stanisławek'] | ['cs.CV', 'cs.CL', 'cs.LG'] | We call on the Document AI (DocAI) community to reevaluate current
methodologies and embrace the challenge of creating more practically-oriented
benchmarks. Document Understanding Dataset and Evaluation (DUDE) seeks to
remediate the halted research progress in understanding visually-rich documents
(VRDs). We present a ... | 2023-05-15T08:54:32Z | Accepted at ICCV 2023 | null | null | null | null | null | null | null | null | null |
2,305.08891 | Common Diffusion Noise Schedules and Sample Steps are Flawed | ['Shanchuan Lin', 'Bingchen Liu', 'Jiashi Li', 'Xiao Yang'] | ['cs.CV'] | We discover that common diffusion noise schedules do not enforce the last
timestep to have zero signal-to-noise ratio (SNR), and some implementations of
diffusion samplers do not start from the last timestep. Such designs are flawed
and do not reflect the fact that the model is given pure Gaussian noise at
inference, c... | 2023-05-15T12:21:08Z | null | null | null | Common Diffusion Noise Schedules and Sample Steps are Flawed | ['Shanchuan Lin', 'Bingchen Liu', 'Jiashi Li', 'Xiao Yang'] | 2,023 | IEEE Workshop/Winter Conference on Applications of Computer Vision | 229 | 16 | ['Computer Science'] |
2,305.09137 | Pre-Training to Learn in Context | ['Yuxian Gu', 'Li Dong', 'Furu Wei', 'Minlie Huang'] | ['cs.CL'] | In-context learning, where pre-trained language models learn to perform tasks
from task examples and instructions in their contexts, has attracted much
attention in the NLP community. However, the ability of in-context learning is
not fully exploited because language models are not explicitly trained to learn
in contex... | 2023-05-16T03:38:06Z | ACL2023 Main Conference | null | null | null | null | null | null | null | null | null |
2,305.09148 | Dual-Alignment Pre-training for Cross-lingual Sentence Embedding | ['Ziheng Li', 'Shaohan Huang', 'Zihan Zhang', 'Zhi-Hong Deng', 'Qiang Lou', 'Haizhen Huang', 'Jian Jiao', 'Furu Wei', 'Weiwei Deng', 'Qi Zhang'] | ['cs.CL', 'cs.AI'] | Recent studies have shown that dual encoder models trained with the
sentence-level translation ranking task are effective methods for cross-lingual
sentence embedding. However, our research indicates that token-level alignment
is also crucial in multilingual scenarios, which has not been fully explored
previously. Base... | 2023-05-16T03:53:30Z | ACL 2023 | null | null | Dual-Alignment Pre-training for Cross-lingual Sentence Embedding | ['Ziheng Li', 'Shaohan Huang', 'Zi-qiang Zhang', 'Zhi-Hong Deng', 'Qiang Lou', 'Haizhen Huang', 'Jian Jiao', 'Furu Wei', 'Weiwei Deng', 'Qi Zhang'] | 2,023 | Annual Meeting of the Association for Computational Linguistics | 11 | 30 | ['Computer Science'] |
2,305.09167 | Adversarial Speaker Disentanglement Using Unannotated External Data for
Self-supervised Representation Based Voice Conversion | ['Xintao Zhao', 'Shuai Wang', 'Yang Chao', 'Zhiyong Wu', 'Helen Meng'] | ['cs.SD', 'cs.CL', 'eess.AS'] | Nowadays, recognition-synthesis-based methods have been quite popular with
voice conversion (VC). By introducing linguistics features with good
disentangling characters extracted from an automatic speech recognition (ASR)
model, the VC performance achieved considerable breakthroughs. Recently,
self-supervised learning ... | 2023-05-16T04:52:29Z | Accepted by ICME 2023 | null | null | Adversarial Speaker Disentanglement Using Unannotated External Data for Self-supervised Representation-based Voice Conversion | ['Xintao Zhao', 'Shuai Wang', 'Yang Chao', 'Zhiyong Wu', 'H. Meng'] | 2,023 | IEEE International Conference on Multimedia and Expo | 3 | 20 | ['Computer Science', 'Engineering'] |
2,305.09617 | Towards Expert-Level Medical Question Answering with Large Language
Models | ['Karan Singhal', 'Tao Tu', 'Juraj Gottweis', 'Rory Sayres', 'Ellery Wulczyn', 'Le Hou', 'Kevin Clark', 'Stephen Pfohl', 'Heather Cole-Lewis', 'Darlene Neal', 'Mike Schaekermann', 'Amy Wang', 'Mohamed Amin', 'Sami Lachgar', 'Philip Mansfield', 'Sushant Prakash', 'Bradley Green', 'Ewa Dominowska', 'Blaise Aguera y Arcas... | ['cs.CL', 'cs.AI', 'cs.LG'] | Recent artificial intelligence (AI) systems have reached milestones in "grand
challenges" ranging from Go to protein-folding. The capability to retrieve
medical knowledge, reason over it, and answer medical questions comparably to
physicians has long been viewed as one such grand challenge.
Large language models (LLM... | 2023-05-16T17:11:29Z | null | null | null | null | null | null | null | null | null | null |
2,305.09636 | SoundStorm: Efficient Parallel Audio Generation | ['Zalán Borsos', 'Matt Sharifi', 'Damien Vincent', 'Eugene Kharitonov', 'Neil Zeghidour', 'Marco Tagliasacchi'] | ['cs.SD', 'cs.LG', 'eess.AS'] | We present SoundStorm, a model for efficient, non-autoregressive audio
generation. SoundStorm receives as input the semantic tokens of AudioLM, and
relies on bidirectional attention and confidence-based parallel decoding to
generate the tokens of a neural audio codec. Compared to the autoregressive
generation approach ... | 2023-05-16T17:41:25Z | null | null | null | null | null | null | null | null | null | null |
2,305.09652 | The Interpreter Understands Your Meaning: End-to-end Spoken Language
Understanding Aided by Speech Translation | ['Mutian He', 'Philip N. Garner'] | ['cs.CL', 'cs.SD', 'eess.AS'] | End-to-end spoken language understanding (SLU) remains elusive even with
current large pretrained language models on text and speech, especially in
multilingual cases. Machine translation has been established as a powerful
pretraining objective on text as it enables the model to capture high-level
semantics of the inpu... | 2023-05-16T17:53:03Z | 16 pages, 3 figures; accepted by Findings of EMNLP 2023 | null | null | null | null | null | null | null | null | null |
2,305.09688 | OOD-Speech: A Large Bengali Speech Recognition Dataset for
Out-of-Distribution Benchmarking | ['Fazle Rabbi Rakib', 'Souhardya Saha Dip', 'Samiul Alam', 'Nazia Tasnim', 'Md. Istiak Hossain Shihab', 'Md. Nazmuddoha Ansary', 'Syed Mobassir Hossen', 'Marsia Haque Meghla', 'Mamunur Mamun', 'Farig Sadeque', 'Sayma Sultana Chowdhury', 'Tahsin Reasat', 'Asif Sushmit', 'Ahmed Imtiaz Humayun'] | ['eess.AS', 'cs.CL', 'cs.LG'] | We present OOD-Speech, the first out-of-distribution (OOD) benchmarking
dataset for Bengali automatic speech recognition (ASR). Being one of the most
spoken languages globally, Bengali portrays large diversity in dialects and
prosodic features, which demands ASR frameworks to be robust towards
distribution shifts. For ... | 2023-05-15T18:00:39Z | null | null | null | null | null | null | null | null | null | null |
2305.09690 | A Whisper transformer for audio captioning trained with synthetic
captions and transfer learning | ['Marek Kadlčík', 'Adam Hájek', 'Jürgen Kieslich', 'Radosław Winiecki'] | ['cs.SD', 'cs.LG', 'eess.AS'] | The field of audio captioning has seen significant advancements in recent
years, driven by the availability of large-scale audio datasets and
advancements in deep learning techniques. In this technical report, we present
our approach to audio captioning, focusing on the use of a pretrained
speech-to-text Whisper model ... | 2023-05-15T22:20:07Z | null | null | null | A Whisper transformer for audio captioning trained with synthetic captions and transfer learning | ['Marek Kadlcík', "Adam H'ajek", 'Jürgen Kieslich', 'Radoslaw Winiecki'] | 2,023 | arXiv.org | 11 | 9 | ['Computer Science', 'Engineering'] |
2,305.09731 | What In-Context Learning "Learns" In-Context: Disentangling Task
Recognition and Task Learning | ['Jane Pan', 'Tianyu Gao', 'Howard Chen', 'Danqi Chen'] | ['cs.CL', 'cs.LG'] | Large language models (LLMs) exploit in-context learning (ICL) to solve tasks
with only a few demonstrations, but its mechanisms are not yet well-understood.
Some works suggest that LLMs only recall already learned concepts from
pre-training, while others hint that ICL performs implicit learning over
demonstrations. We... | 2023-05-16T18:05:19Z | Accepted to Findings of ACL 2023; The code is available at
https://github.com/princeton-nlp/WhatICLLearns | null | null | What In-Context Learning "Learns" In-Context: Disentangling Task Recognition and Task Learning | ['Jane Pan', 'Tianyu Gao', 'Howard Chen', 'Danqi Chen'] | 2,023 | Annual Meeting of the Association for Computational Linguistics | 128 | 31 | ['Computer Science'] |
2,305.09781 | SpecInfer: Accelerating Generative Large Language Model Serving with
Tree-based Speculative Inference and Verification | ['Xupeng Miao', 'Gabriele Oliaro', 'Zhihao Zhang', 'Xinhao Cheng', 'Zeyu Wang', 'Zhengxin Zhang', 'Rae Ying Yee Wong', 'Alan Zhu', 'Lijie Yang', 'Xiaoxiang Shi', 'Chunan Shi', 'Zhuoming Chen', 'Daiyaan Arfeen', 'Reyna Abhyankar', 'Zhihao Jia'] | ['cs.CL', 'cs.DC', 'cs.LG'] | This paper introduces SpecInfer, a system that accelerates generative large
language model (LLM) serving with tree-based speculative inference and
verification. The key idea behind SpecInfer is leveraging small speculative
models to predict the LLM's outputs; the predictions are organized as a token
tree, whose nodes e... | 2023-05-16T20:12:59Z | ASPLOS'24 | null | 10.1145/3620666.3651335 | null | null | null | null | null | null | null |
2,305.09857 | CoEdIT: Text Editing by Task-Specific Instruction Tuning | ['Vipul Raheja', 'Dhruv Kumar', 'Ryan Koo', 'Dongyeop Kang'] | ['cs.CL', 'cs.AI', 'I.2.7'] | We introduce CoEdIT, a state-of-the-art text editing system for writing
assistance. CoEdIT takes instructions from the user specifying the attributes
of the desired text, such as "Make the sentence simpler" or "Write it in a more
neutral style," and outputs the edited text. We present a large language model
fine-tuned ... | 2023-05-17T00:05:24Z | Accepted to EMNLP 2023 (Findings). 18 pages, 13 tables, 2 figures | null | null | null | null | null | null | null | null | null |
2,305.09972 | Real-Time Flying Object Detection with YOLOv8 | ['Dillon Reis', 'Jordan Kupec', 'Jacqueline Hong', 'Ahmad Daoudi'] | ['cs.CV', 'cs.LG', 'I.2.10; I.2.6'] | This paper presents a generalized model for real-time detection of flying
objects that can be used for transfer learning and further research, as well as
a refined model that achieves state-of-the-art results for flying object
detection. We achieve this by training our first (generalized) model on a data
set containing... | 2023-05-17T06:11:10Z | 10 pages, 7 figures | null | null | null | null | null | null | null | null | null |
2,305.10005 | DinoSR: Self-Distillation and Online Clustering for Self-supervised
Speech Representation Learning | ['Alexander H. Liu', 'Heng-Jui Chang', 'Michael Auli', 'Wei-Ning Hsu', 'James R. Glass'] | ['cs.CL'] | In this paper, we introduce self-distillation and online clustering for
self-supervised speech representation learning (DinoSR) which combines masked
language modeling, self-distillation, and online clustering. We show that these
concepts complement each other and result in a strong representation learning
model for sp... | 2023-05-17T07:23:46Z | null | null | null | null | null | null | null | null | null | null |
2,305.10149 | Multi-Grained Knowledge Retrieval for End-to-End Task-Oriented Dialog | ['Fanqi Wan', 'Weizhou Shen', 'Ke Yang', 'Xiaojun Quan', 'Wei Bi'] | ['cs.CL'] | Retrieving proper domain knowledge from an external database lies at the
heart of end-to-end task-oriented dialog systems to generate informative
responses. Most existing systems blend knowledge retrieval with response
generation and optimize them with direct supervision from reference responses,
leading to suboptimal ... | 2023-05-17T12:12:46Z | Accepted to ACL 2023 (Main Conference) | null | null | null | null | null | null | null | null | null |
2,305.10314 | LeTI: Learning to Generate from Textual Interactions | ['Xingyao Wang', 'Hao Peng', 'Reyhaneh Jabbarvand', 'Heng Ji'] | ['cs.CL', 'cs.AI', 'cs.SE'] | Fine-tuning pre-trained language models (LMs) is essential for enhancing
their capabilities. Existing techniques commonly fine-tune on input-output
pairs (e.g., instruction tuning) or with numerical rewards that gauge the
output quality (e.g., RLHF). We explore LMs' potential to learn from textual
interactions (LETI) t... | 2023-05-17T15:53:31Z | NAACL 2024 Findings | null | null | null | null | null | null | null | null | null |
2,305.10355 | Evaluating Object Hallucination in Large Vision-Language Models | ['Yifan Li', 'Yifan Du', 'Kun Zhou', 'Jinpeng Wang', 'Wayne Xin Zhao', 'Ji-Rong Wen'] | ['cs.CV', 'cs.CL', 'cs.MM'] | Inspired by the superior language abilities of large language models (LLM),
large vision-language models (LVLM) have been recently explored by integrating
powerful LLMs for improving the performance on complex multimodal tasks.
Despite the promising progress on LVLMs, we find that LVLMs suffer from the
hallucination pr... | 2023-05-17T16:34:01Z | Accepted to EMNLP 2023 | null | null | null | null | null | null | null | null | null |
2,305.10424 | ZeroFlow: Scalable Scene Flow via Distillation | ['Kyle Vedder', 'Neehar Peri', 'Nathaniel Chodosh', 'Ishan Khatri', 'Eric Eaton', 'Dinesh Jayaraman', 'Yang Liu', 'Deva Ramanan', 'James Hays'] | ['cs.CV', 'cs.LG'] | Scene flow estimation is the task of describing the 3D motion field between
temporally successive point clouds. State-of-the-art methods use strong priors
and test-time optimization techniques, but require on the order of tens of
seconds to process full-size point clouds, making them unusable as computer
vision primiti... | 2023-05-17T17:56:59Z | Accepted to ICLR 2024. 9 pages, 4 pages of citations, 6 pages of
Supplemental. Project page with data releases is at
http://vedder.io/zeroflow.html | null | null | null | null | null | null | null | null | null |
2,305.10425 | SLiC-HF: Sequence Likelihood Calibration with Human Feedback | ['Yao Zhao', 'Rishabh Joshi', 'Tianqi Liu', 'Misha Khalman', 'Mohammad Saleh', 'Peter J. Liu'] | ['cs.CL', 'cs.AI'] | Learning from human feedback has been shown to be effective at aligning
language models with human preferences. Past work has often relied on
Reinforcement Learning from Human Feedback (RLHF), which optimizes the language
model using reward scores assigned from a reward model trained on human
preference data. In this w... | 2023-05-17T17:57:10Z | null | null | null | SLiC-HF: Sequence Likelihood Calibration with Human Feedback | ['Yao Zhao', 'Rishabh Joshi', 'Tianqi Liu', 'Misha Khalman', 'Mohammad Saleh', 'Peter J. Liu'] | 2,023 | arXiv.org | 307 | 22 | ['Computer Science'] |
2,305.10472 | Nine tips for ecologists using machine learning | ['Marine Desprez', 'Vincent Miele', 'Olivier Gimenez'] | ['q-bio.PE', 'cs.LG'] | Due to their high predictive performance and flexibility, machine learning
models are an appropriate and efficient tool for ecologists. However,
implementing a machine learning model is not yet a trivial task and may seem
intimidating to ecologists with no previous experience in this area. Here we
provide a series of t... | 2023-05-17T15:41:08Z | null | null | null | Nine tips for ecologists using machine learning | ['Marine Desprez', 'Vincent Miele', 'O. Gimenez'] | 2,023 | arXiv.org | 3 | 72 | ['Biology', 'Computer Science'] |
2,305.10615 | ML-SUPERB: Multilingual Speech Universal PERformance Benchmark | ['Jiatong Shi', 'Dan Berrebbi', 'William Chen', 'Ho-Lam Chung', 'En-Pei Hu', 'Wei Ping Huang', 'Xuankai Chang', 'Shang-Wen Li', 'Abdelrahman Mohamed', 'Hung-yi Lee', 'Shinji Watanabe'] | ['cs.SD', 'cs.CL', 'eess.AS'] | Speech processing Universal PERformance Benchmark (SUPERB) is a leaderboard
to benchmark the performance of Self-Supervised Learning (SSL) models on
various speech processing tasks. However, SUPERB largely considers English
speech in its evaluation. This paper presents multilingual SUPERB (ML-SUPERB),
covering 143 lang... | 2023-05-18T00:01:27Z | Accepted by Interspeech | null | null | ML-SUPERB: Multilingual Speech Universal PERformance Benchmark | ['Jiatong Shi', 'Dan Berrebbi', 'William Chen', 'Ho-Lam Chung', 'En-Pei Hu', 'Wei Huang', 'Xuankai Chang', 'Shang-Wen Li', 'Abdel-rahman Mohamed', 'Hung-yi Lee', 'Shinji Watanabe'] | 2,023 | Interspeech | 70 | 48 | ['Computer Science', 'Engineering'] |
2,305.10703 | ReGen: Zero-Shot Text Classification via Training Data Generation with
Progressive Dense Retrieval | ['Yue Yu', 'Yuchen Zhuang', 'Rongzhi Zhang', 'Yu Meng', 'Jiaming Shen', 'Chao Zhang'] | ['cs.CL', 'cs.IR', 'cs.LG'] | With the development of large language models (LLMs), zero-shot learning has
attracted much attention for various NLP tasks. Different from prior works that
generate training data with billion-scale natural language generation (NLG)
models, we propose a retrieval-enhanced framework to create training data from
a genera... | 2023-05-18T04:30:09Z | ACL 2023 Findings (Code: https://github.com/yueyu1030/ReGen) | null | null | null | null | null | null | null | null | null |
2,305.10853 | LDM3D: Latent Diffusion Model for 3D | ['Gabriela Ben Melech Stan', 'Diana Wofk', 'Scottie Fox', 'Alex Redden', 'Will Saxton', 'Jean Yu', 'Estelle Aflalo', 'Shao-Yen Tseng', 'Fabio Nonato', 'Matthias Muller', 'Vasudev Lal'] | ['cs.CV'] | This research paper proposes a Latent Diffusion Model for 3D (LDM3D) that
generates both image and depth map data from a given text prompt, allowing
users to generate RGBD images from text prompts. The LDM3D model is fine-tuned
on a dataset of tuples containing an RGB image, depth map and caption, and
validated through... | 2023-05-18T10:15:06Z | null | null | null | null | null | null | null | null | null | null |
2,305.10973 | Drag Your GAN: Interactive Point-based Manipulation on the Generative
Image Manifold | ['Xingang Pan', 'Ayush Tewari', 'Thomas Leimkühler', 'Lingjie Liu', 'Abhimitra Meka', 'Christian Theobalt'] | ['cs.CV', 'cs.GR'] | Synthesizing visual content that meets users' needs often requires flexible
and precise controllability of the pose, shape, expression, and layout of the
generated objects. Existing approaches gain controllability of generative
adversarial networks (GANs) via manually annotated training data or a prior 3D
model, which ... | 2023-05-18T13:41:25Z | Accepted to SIGGRAPH 2023. Project page:
https://vcai.mpi-inf.mpg.de/projects/DragGAN/ | null | 10.1145/3588432.3591500 | Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold | ['Xingang Pan', 'A. Tewari', 'Thomas Leimkühler', 'Lingjie Liu', 'Abhimitra Meka', 'C. Theobalt'] | 2,023 | International Conference on Computer Graphics and Interactive Techniques | 248 | 70 | ['Computer Science'] |
2305.11000 | SpeechGPT: Empowering Large Language Models with Intrinsic Cross-Modal
Conversational Abilities | ['Dong Zhang', 'Shimin Li', 'Xin Zhang', 'Jun Zhan', 'Pengyu Wang', 'Yaqian Zhou', 'Xipeng Qiu'] | ['cs.CL'] | Multi-modal large language models are regarded as a crucial step towards
Artificial General Intelligence (AGI) and have garnered significant interest
with the emergence of ChatGPT. However, current speech-language models
typically adopt the cascade paradigm, preventing inter-modal knowledge
transfer. In this paper, we ... | 2023-05-18T14:23:25Z | work in progress | null | null | null | null | null | null | null | null | null |
2,305.11129 | mLongT5: A Multilingual and Efficient Text-To-Text Transformer for
Longer Sequences | ['David Uthus', 'Santiago Ontañón', 'Joshua Ainslie', 'Mandy Guo'] | ['cs.CL'] | We present our work on developing a multilingual, efficient text-to-text
transformer that is suitable for handling long inputs. This model, called
mLongT5, builds upon the architecture of LongT5, while leveraging the
multilingual datasets used for pretraining mT5 and the pretraining tasks of
UL2. We evaluate this model... | 2023-05-18T17:22:53Z | null | null | null | mLongT5: A Multilingual and Efficient Text-To-Text Transformer for Longer Sequences | ['David C. Uthus', "Santiago Ontan'on", 'J. Ainslie', 'Mandy Guo'] | 2,023 | Conference on Empirical Methods in Natural Language Processing | 12 | 18 | ['Computer Science'] |
2,305.11147 | UniControl: A Unified Diffusion Model for Controllable Visual Generation
In the Wild | ['Can Qin', 'Shu Zhang', 'Ning Yu', 'Yihao Feng', 'Xinyi Yang', 'Yingbo Zhou', 'Huan Wang', 'Juan Carlos Niebles', 'Caiming Xiong', 'Silvio Savarese', 'Stefano Ermon', 'Yun Fu', 'Ran Xu'] | ['cs.CV', 'cs.AI'] | Achieving machine autonomy and human control often represent divergent
objectives in the design of interactive AI systems. Visual generative
foundation models such as Stable Diffusion show promise in navigating these
goals, especially when prompted with arbitrary languages. However, they often
fall short in generating ... | 2023-05-18T17:41:34Z | NeurIPS 2023 | null | null | null | null | null | null | null | null | null |
2,305.11171 | TrueTeacher: Learning Factual Consistency Evaluation with Large Language
Models | ['Zorik Gekhman', 'Jonathan Herzig', 'Roee Aharoni', 'Chen Elkind', 'Idan Szpektor'] | ['cs.CL'] | Factual consistency evaluation is often conducted using Natural Language
Inference (NLI) models, yet these models exhibit limited success in evaluating
summaries. Previous work improved such models with synthetic training data.
However, the data is typically based on perturbed human-written summaries,
which often diffe... | 2023-05-18T17:58:35Z | Accepted as a long paper in EMNLP 2023 | null | null | TrueTeacher: Learning Factual Consistency Evaluation with Large Language Models | ['Zorik Gekhman', 'Jonathan Herzig', 'Roee Aharoni', 'C. Elkind', 'Idan Szpektor'] | 2,023 | Conference on Empirical Methods in Natural Language Processing | 80 | 66 | ['Computer Science'] |
2,305.11206 | LIMA: Less Is More for Alignment | ['Chunting Zhou', 'Pengfei Liu', 'Puxin Xu', 'Srini Iyer', 'Jiao Sun', 'Yuning Mao', 'Xuezhe Ma', 'Avia Efrat', 'Ping Yu', 'Lili Yu', 'Susan Zhang', 'Gargi Ghosh', 'Mike Lewis', 'Luke Zettlemoyer', 'Omer Levy'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Large language models are trained in two stages: (1) unsupervised pretraining
from raw text, to learn general-purpose representations, and (2) large scale
instruction tuning and reinforcement learning, to better align to end tasks and
user preferences. We measure the relative importance of these two stages by
training ... | 2023-05-18T17:45:22Z | null | null | null | null | null | null | null | null | null | null |
2,305.11212 | Energy-Consumption Advantage of Quantum Computation | ['Florian Meier', 'Hayata Yamasaki'] | ['quant-ph'] | Energy consumption in solving computational problems has been gaining growing
attention as one of the key performance measures for computers. Quantum
computation is known to offer advantages over classical computation in terms of
various computational resources; however, proving its energy-consumption
advantage has bee... | 2023-05-18T18:00:00Z | 50 pages, 6 figures | PRX Energy 4, 023008 (2025) | 10.1103/PRXEnergy.4.023008 | Energy-Consumption Advantage of Quantum Computation | ['Florian Meier', 'H. Yamasaki'] | 2,023 | PRX Energy | 12 | 122 | ['Physics'] |
2,305.11255 | Reasoning Implicit Sentiment with Chain-of-Thought Prompting | ['Hao Fei', 'Bobo Li', 'Qian Liu', 'Lidong Bing', 'Fei Li', 'Tat-Seng Chua'] | ['cs.CL'] | While sentiment analysis systems try to determine the sentiment polarities of
given targets based on the key opinion expressions in input texts, in implicit
sentiment analysis (ISA) the opinion cues come in an implicit and obscure
manner. Thus detecting implicit sentiment requires the common-sense and
multi-hop reasoni... | 2023-05-18T18:38:32Z | ACL2023 Short Paper | null | null | Reasoning Implicit Sentiment with Chain-of-Thought Prompting | ['Hao Fei', 'Bobo Li', 'Qian Liu', 'Lidong Bing', 'Fei Li', 'Tat-seng Chua'] | 2,023 | Annual Meeting of the Association for Computational Linguistics | 104 | 42 | ['Computer Science'] |
2,305.11442 | Zero-Shot Text Classification via Self-Supervised Tuning | ['Chaoqun Liu', 'Wenxuan Zhang', 'Guizhen Chen', 'Xiaobao Wu', 'Anh Tuan Luu', 'Chip Hong Chang', 'Lidong Bing'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Existing solutions to zero-shot text classification either conduct prompting
with pre-trained language models, which is sensitive to the choices of
templates, or rely on large-scale annotated data of relevant tasks for
meta-tuning. In this work, we propose a new paradigm based on self-supervised
learning to solve zero-... | 2023-05-19T05:47:33Z | Accepted to the Findings of ACL 2023 | null | null | Zero-Shot Text Classification via Self-Supervised Tuning | ['Chaoqun Liu', 'Wenxuan Zhang', 'Guizhen Chen', 'Xiaobao Wu', 'A. Luu', 'Chip-Hong Chang', 'Lidong Bing'] | 2,023 | Annual Meeting of the Association for Computational Linguistics | 11 | 39 | ['Computer Science'] |
2305.11520 | LaCon: Late-Constraint Diffusion for Steerable Guided Image Synthesis | ['Chang Liu', 'Rui Li', 'Kaidong Zhang', 'Xin Luo', 'Dong Liu'] | ['cs.CV'] | Diffusion models have demonstrated impressive abilities in generating
photo-realistic and creative images. To offer more controllability for the
generation process, existing studies, termed as early-constraint methods in
this paper, leverage extra conditions and incorporate them into pre-trained
diffusion models. Parti... | 2023-05-19T08:40:01Z | GitHub repo: https://github.com/AlonzoLeeeooo/LCDG | null | null | null | null | null | null | null | null | null |
2,305.11527 | InstructIE: A Bilingual Instruction-based Information Extraction Dataset | ['Honghao Gui', 'Shuofei Qiao', 'Jintian Zhang', 'Hongbin Ye', 'Mengshu Sun', 'Lei Liang', 'Jeff Z. Pan', 'Huajun Chen', 'Ningyu Zhang'] | ['cs.CL', 'cs.AI', 'cs.IR', 'cs.LG'] | Large language models can perform well on general natural language tasks, but
their effectiveness is still suboptimal for information extraction (IE). Recent
works indicate that the main reason lies in the lack of extensive data on IE
instructions. Note that the existing datasets on IE instructions not only have
limite... | 2023-05-19T08:51:11Z | ISWC 2024; project homepage:
https://www.zjukg.org/project/InstructIE/ dataset:
https://huggingface.co/datasets/zjunlp/InstructIE | null | null | null | null | null | null | null | null | null |
2305.11560 | Brain Captioning: Decoding human brain activity into images and text | ['Matteo Ferrante', 'Furkan Ozcelik', 'Tommaso Boccato', 'Rufin VanRullen', 'Nicola Toschi'] | ['cs.CV', 'cs.AI'] | Every day, the human brain processes an immense volume of visual information,
relying on intricate neural mechanisms to perceive and interpret these stimuli.
Recent breakthroughs in functional magnetic resonance imaging (fMRI) have
enabled scientists to extract visual information from human brain activity
patterns. In ... | 2023-05-19T09:57:19Z | null | null | null | null | null | null | null | null | null | null |
2,305.11627 | LLM-Pruner: On the Structural Pruning of Large Language Models | ['Xinyin Ma', 'Gongfan Fang', 'Xinchao Wang'] | ['cs.CL'] | Large language models (LLMs) have shown remarkable capabilities in language
understanding and generation. However, such impressive capability typically
comes with a substantial model size, which presents significant challenges in
both the deployment, inference, and training stages. With LLM being a
general-purpose task... | 2023-05-19T12:10:53Z | Accepted at NeurIPS 2023 | null | null | null | null | null | null | null | null | null |
2,305.11685 | Recycle-and-Distill: Universal Compression Strategy for
Transformer-based Speech SSL Models with Attention Map Reusing and Masking
Distillation | ['Kangwook Jang', 'Sungnyun Kim', 'Se-Young Yun', 'Hoirin Kim'] | ['eess.AS', 'cs.CL', 'cs.LG'] | Transformer-based speech self-supervised learning (SSL) models, such as
HuBERT, show surprising performance in various speech processing tasks.
However, huge number of parameters in speech SSL models necessitate the
compression to a more compact model for wider usage in academia or small
companies. In this study, we su... | 2023-05-19T14:07:43Z | Proceedings of Interspeech 2023. Code URL:
https://github.com/sungnyun/ARMHuBERT | null | 10.21437/Interspeech.2023-1329 | Recycle-and-Distill: Universal Compression Strategy for Transformer-based Speech SSL Models with Attention Map Reusing and Masking Distillation | ['Kangwook Jang', 'Sungnyun Kim', 'Se-Young Yun', 'Hoi-Rim Kim'] | 2,023 | Interspeech | 5 | 32 | ['Computer Science', 'Engineering'] |
2,305.11747 | HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large
Language Models | ['Junyi Li', 'Xiaoxue Cheng', 'Wayne Xin Zhao', 'Jian-Yun Nie', 'Ji-Rong Wen'] | ['cs.CL'] | Large language models (LLMs), such as ChatGPT, are prone to generate
hallucinations, i.e., content that conflicts with the source or cannot be
verified by the factual knowledge. To understand what types of content and to
which extent LLMs are apt to hallucinate, we introduce the Hallucination
Evaluation benchmark for L... | 2023-05-19T15:36:27Z | Accepted to EMNLP 2023 Main Conference (Long Paper) | null | null | null | null | null | null | null | null | null |
2,305.11772 | Neural Foundations of Mental Simulation: Future Prediction of Latent
Representations on Dynamic Scenes | ['Aran Nayebi', 'Rishi Rajalingham', 'Mehrdad Jazayeri', 'Guangyu Robert Yang'] | ['cs.AI', 'cs.CV', 'cs.RO', 'q-bio.NC'] | Humans and animals have a rich and flexible understanding of the physical
world, which enables them to infer the underlying dynamical trajectories of
objects and events, plausible future states, and use that to plan and
anticipate the consequences of actions. However, the neural mechanisms
underlying these computations... | 2023-05-19T15:56:06Z | 20 pages, 10 figures, NeurIPS 2023 Camera Ready Version (spotlight) | null | null | Neural Foundations of Mental Simulation: Future Prediction of Latent Representations on Dynamic Scenes | ['Aran Nayebi', 'R. Rajalingham', 'M. Jazayeri', 'G. R. Yang'] | 2,023 | Neural Information Processing Systems | 20 | 64 | ['Medicine', 'Computer Science', 'Biology'] |
2,305.11806 | The Inside Story: Towards Better Understanding of Machine Translation
Neural Evaluation Metrics | ['Ricardo Rei', 'Nuno M. Guerreiro', 'Marcos Treviso', 'Luisa Coheur', 'Alon Lavie', 'André F. T. Martins'] | ['cs.CL'] | Neural metrics for machine translation evaluation, such as COMET, exhibit
significant improvements in their correlation with human judgments, as compared
to traditional metrics based on lexical overlap, such as BLEU. Yet, neural
metrics are, to a great extent, "black boxes" returning a single sentence-level
score witho... | 2023-05-19T16:42:17Z | Accepted at ACL 2023 | null | null | null | null | null | null | null | null | null |
2,305.11846 | Any-to-Any Generation via Composable Diffusion | ['Zineng Tang', 'Ziyi Yang', 'Chenguang Zhu', 'Michael Zeng', 'Mohit Bansal'] | ['cs.CV', 'cs.CL', 'cs.LG', 'cs.SD', 'eess.AS'] | We present Composable Diffusion (CoDi), a novel generative model capable of
generating any combination of output modalities, such as language, image,
video, or audio, from any combination of input modalities. Unlike existing
generative AI systems, CoDi can generate multiple modalities in parallel and
its input is not l... | 2023-05-19T17:38:32Z | Project Page: https://codi-gen.github.io | null | null | Any-to-Any Generation via Composable Diffusion | ['Zineng Tang', 'Ziyi Yang', 'Chenguang Zhu', 'Michael Zeng', 'Mohit Bansal'] | 2,023 | Neural Information Processing Systems | 191 | 59 | ['Computer Science', 'Engineering'] |
2,305.11938 | XTREME-UP: A User-Centric Scarce-Data Benchmark for Under-Represented
Languages | ['Sebastian Ruder', 'Jonathan H. Clark', 'Alexander Gutkin', 'Mihir Kale', 'Min Ma', 'Massimo Nicosia', 'Shruti Rijhwani', 'Parker Riley', 'Jean-Michel A. Sarr', 'Xinyi Wang', 'John Wieting', 'Nitish Gupta', 'Anna Katanova', 'Christo Kirov', 'Dana L. Dickinson', 'Brian Roark', 'Bidisha Samanta', 'Connie Tao', 'David I.... | ['cs.CL'] | Data scarcity is a crucial issue for the development of highly multilingual
NLP systems. Yet for many under-represented languages (ULs) -- languages for
which NLP re-search is particularly far behind in meeting user needs -- it is
feasible to annotate small amounts of data. Motivated by this, we propose
XTREME-UP, a be... | 2023-05-19T18:00:03Z | null | null | 10.18653/v1/2023.findings-emnlp.125 | XTREME-UP: A User-Centric Scarce-Data Benchmark for Under-Represented Languages | ['Sebastian Ruder', 'J. Clark', 'Alexander Gutkin', 'Mihir Kale', 'Min Ma', 'M. Nicosia', 'Shruti Rijhwani', 'Parker Riley', 'J. M. Sarr', 'Xinyi Wang', 'J. Wieting', 'Nitish Gupta', 'Anna Katanova', 'Christo Kirov', 'Dana L. Dickinson', 'Brian Roark', 'Bidisha Samanta', 'Connie Tao', 'David Ifeoluwa Adelani', 'Vera Ax... | 2,023 | Conference on Empirical Methods in Natural Language Processing | 40 | 103 | ['Computer Science'] |
2,305.11952 | Self-QA: Unsupervised Knowledge Guided Language Model Alignment | ['Xuanyu Zhang', 'Qing Yang'] | ['cs.CL'] | Large-scale language models like ChatGPT and GPT-4 have gained attention for
their impressive conversational and generative capabilities. However, the
creation of supervised paired question-answering data for instruction tuning
presents formidable challenges. This endeavor necessitates substantial human
effort for data... | 2023-05-19T18:26:26Z | null | null | null | Self-QA: Unsupervised Knowledge Guided Language Model Alignment | ['Xuanyu Zhang', 'Qing Yang'] | 2,023 | arXiv.org | 12 | 12 | ['Computer Science'] |
2,305.11984 | OL-Transformer: A Fast and Universal Surrogate Simulator for Optical
Multilayer Thin Film Structures | ['Taigao Ma', 'Haozhu Wang', 'L. Jay Guo'] | ['cs.LG', 'physics.optics'] | Deep learning-based methods have recently been established as fast and
accurate surrogate simulators for optical multilayer thin film structures.
However, existing methods only work for limited types of structures with
different material arrangements, preventing their applications towards diverse
and universal structur... | 2023-05-19T20:05:07Z | 4 pages, 4 figures | null | null | null | null | null | null | null | null | null |
2,305.12002 | XuanYuan 2.0: A Large Chinese Financial Chat Model with Hundreds of
Billions Parameters | ['Xuanyu Zhang', 'Qing Yang', 'Dongliang Xu'] | ['cs.CL'] | In recent years, pre-trained language models have undergone rapid development
with the emergence of large-scale models. However, there is a lack of
open-sourced chat models specifically designed for the Chinese language,
especially in the field of Chinese finance, at the scale of hundreds of
billions. To address this g... | 2023-05-19T21:01:20Z | null | null | null | null | null | null | null | null | null | null |
2,305.12031 | Clinical Camel: An Open Expert-Level Medical Language Model with
Dialogue-Based Knowledge Encoding | ['Augustin Toma', 'Patrick R. Lawler', 'Jimmy Ba', 'Rahul G. Krishnan', 'Barry B. Rubin', 'Bo Wang'] | ['cs.CL', 'cs.AI'] | We present Clinical Camel, an open large language model (LLM) explicitly
tailored for clinical research. Fine-tuned from LLaMA-2 using QLoRA, Clinical
Camel achieves state-of-the-art performance across medical benchmarks among
openly available medical LLMs. Leveraging efficient single-GPU training,
Clinical Camel surpa... | 2023-05-19T23:07:09Z | for model weights, see https://huggingface.co/wanglab/ | null | null | null | null | null | null | null | null | null |
2,305.12129 | Lifting the Curse of Capacity Gap in Distilling Language Models | ['Chen Zhang', 'Yang Yang', 'Jiahao Liu', 'Jingang Wang', 'Yunsen Xian', 'Benyou Wang', 'Dawei Song'] | ['cs.CL', 'cs.LG'] | Pretrained language models (LMs) have shown compelling performance on various
downstream tasks, but unfortunately they require a tremendous amount of
inference compute. Knowledge distillation finds a path to compress LMs to small
ones with a teacher-student paradigm. However, when the capacity gap between
the teacher a... | 2023-05-20T07:30:55Z | 17 pages, 6 figures, 13 tables, accepted to ACL 2023. Code is
available at https://github.com/GeneZC/MiniMoE | null | null | Lifting the Curse of Capacity Gap in Distilling Language Models | ['Chen Zhang', 'Yang Yang', 'Jiahao Liu', 'Jingang Wang', 'Yunsen Xian', 'Benyou Wang', 'Dawei Song'] | 2,023 | Annual Meeting of the Association for Computational Linguistics | 20 | 76 | ['Computer Science'] |
2,305.12182 | Glot500: Scaling Multilingual Corpora and Language Models to 500
Languages | ['Ayyoob Imani', 'Peiqin Lin', 'Amir Hossein Kargaran', 'Silvia Severini', 'Masoud Jalili Sabet', 'Nora Kassner', 'Chunlan Ma', 'Helmut Schmid', 'André F. T. Martins', 'François Yvon', 'Hinrich Schütze'] | ['cs.CL'] | The NLP community has mainly focused on scaling Large Language Models (LLMs)
vertically, i.e., making them better for about 100 languages. We instead scale
LLMs horizontally: we create, through continued pretraining, Glot500-m, an LLM
that covers 511 predominantly low-resource languages. An important part of this
effor... | 2023-05-20T12:26:41Z | ACL 2023 | null | 10.18653/v1/2023.acl-long.61 | Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages | ['Ayyoob Imani', 'Peiqin Lin', 'Amir Hossein Kargaran', 'Silvia Severini', 'Masoud Jalili Sabet', 'Nora Kassner', 'Chunlan Ma', 'Helmut Schmid', 'André F. T. Martins', 'François Yvon', 'Hinrich Schütze'] | 2,023 | Annual Meeting of the Association for Computational Linguistics | 107 | 91 | ['Computer Science'] |
2,305.12301 | Sentence Embedder Guided Utterance Encoder (SEGUE) for Spoken Language
Understanding | ['Yi Xuan Tan', 'Navonil Majumder', 'Soujanya Poria'] | ['cs.CL', 'cs.AI', 'cs.SD', 'eess.AS'] | The pre-trained speech encoder wav2vec 2.0 performs very well on various
spoken language understanding (SLU) tasks. However, on many tasks, it trails
behind text encoders with textual input. To improve the understanding
capability of SLU encoders, various studies have used knowledge distillation to
transfer knowledge f... | 2023-05-20T23:55:55Z | Interspeech 2023 | null | null | null | null | null | null | null | null | null |
2,305.12474 | Evaluating the Performance of Large Language Models on GAOKAO Benchmark | ['Xiaotian Zhang', 'Chunyang Li', 'Yi Zong', 'Zhengyu Ying', 'Liang He', 'Xipeng Qiu'] | ['cs.CL', 'cs.AI'] | Large Language Models(LLMs) have demonstrated remarkable performance across
various natural language processing tasks; however, how to comprehensively and
accurately assess their performance becomes an urgent issue to be addressed.
This paper introduces GAOKAO-Bench, an intuitive benchmark that employs
questions from t... | 2023-05-21T14:39:28Z | null | null | null | Evaluating the Performance of Large Language Models on GAOKAO Benchmark | ['Xiaotian Zhang', 'Chun-yan Li', 'Yi Zong', 'Zhengyu Ying', 'Liang He', 'Xipeng Qiu'] | 2,023 | arXiv.org | 115 | 24 | ['Computer Science'] |
2305.12530 | Towards Robust Family-Infant Audio Analysis Based on Unsupervised
Pretraining of Wav2vec 2.0 on Large-Scale Unlabeled Family Audio | ['Jialu Li', 'Mark Hasegawa-Johnson', 'Nancy L. McElwain'] | ['eess.AS', 'cs.SD'] | To perform automatic family audio analysis, past studies have collected
recordings using phone, video, or audio-only recording devices like LENA,
investigated supervised learning methods, and used or fine-tuned
general-purpose embeddings learned from large pretrained models. In this study,
we advance the audio componen... | 2023-05-21T18:00:16Z | Proceedings of Interspeech 2023; v4 version updates: correction of
W2V2-base pretrained on 960-hour of LibriSpeech and number of families
participated for LENA home recordings | null | 10.21437/Interspeech.2023-460 | null | null | null | null | null | null | null |
2,305.12567 | Model-Generated Pretraining Signals Improves Zero-Shot Generalization of
Text-to-Text Transformers | ['Linyuan Gong', 'Chenyan Xiong', 'Xiaodong Liu', 'Payal Bajaj', 'Yiqing Xie', 'Alvin Cheung', 'Jianfeng Gao', 'Xia Song'] | ['cs.CL'] | This paper explores the effectiveness of model-generated signals in improving
zero-shot generalization of text-to-text Transformers such as T5. We study
various designs to pretrain T5 using an auxiliary model to construct more
challenging token replacements for the main model to denoise. Key aspects under
study include... | 2023-05-21T21:06:23Z | Published as a conference paper at ACL 2023. 9 pages | null | 10.18653/v1/2023.acl-long.724 | null | null | null | null | null | null | null |
2,305.12599 | Abstract Meaning Representation-Based Logic-Driven Data Augmentation for
Logical Reasoning | ['Qiming Bao', 'Alex Yuxuan Peng', 'Zhenyun Deng', 'Wanjun Zhong', 'Gael Gendron', 'Timothy Pistotti', 'Neset Tan', 'Nathan Young', 'Yang Chen', 'Yonghua Zhu', 'Paul Denny', 'Michael Witbrock', 'Jiamou Liu'] | ['cs.CL', 'cs.AI'] | Combining large language models with logical reasoning enhances their
capacity to address problems in a robust and reliable manner. Nevertheless, the
intricate nature of logical reasoning poses challenges when gathering reliable
data from the web to build comprehensive training datasets, subsequently
affecting performa... | 2023-05-21T23:16:26Z | 21 pages, 8 figures, the Findings of ACL 2024 | null | null | null | null | null | null | null | null | null |
2,305.12708 | ViT-TTS: Visual Text-to-Speech with Scalable Diffusion Transformer | ['Huadai Liu', 'Rongjie Huang', 'Xuan Lin', 'Wenqiang Xu', 'Maozong Zheng', 'Hong Chen', 'Jinzheng He', 'Zhou Zhao'] | ['eess.AS', 'cs.SD'] | Text-to-speech(TTS) has undergone remarkable improvements in performance,
particularly with the advent of Denoising Diffusion Probabilistic Models
(DDPMs). However, the perceived quality of audio depends not solely on its
content, pitch, rhythm, and energy, but also on the physical environment. In
this work, we propose... | 2023-05-22T04:37:41Z | Accepted by EMNLP 2023 | null | null | null | null | null | null | null | null | null |
2305.12720 | llm-japanese-dataset v0: Construction of Japanese Chat Dataset for Large
Language Models and its Methodology | ['Masanori Hirano', 'Masahiro Suzuki', 'Hiroki Sakaji'] | ['cs.CL', 'cs.AI'] | This study constructed a Japanese chat dataset for tuning large language
models (LLMs), which consist of about 8.4 million records. Recently, LLMs have
been developed and gaining popularity. However, high-performing LLMs are
usually mainly for English. There are two ways to support languages other than
English by those... | 2023-05-22T04:59:33Z | 12 pages | null | null | null | null | null | null | null | null | null |
2305.12820 | MultiTabQA: Generating Tabular Answers for Multi-Table Question
Answering | ['Vaishali Pal', 'Andrew Yates', 'Evangelos Kanoulas', 'Maarten de Rijke'] | ['cs.CL', 'cs.AI'] | Recent advances in tabular question answering (QA) with large language models
are constrained in their coverage and only answer questions over a single
table. However, real-world queries are complex in nature, often over multiple
tables in a relational database or web page. Single table questions do not
involve common ... | 2023-05-22T08:25:15Z | Accepted at ACL-2023 | In Proceedings of the 61st Annual Meeting of the Association for
Computational Linguistics (Volume 1: Long Papers), 2023, pages 6322-6334,
Toronto, Canada. Association for Computational Linguistics | 10.18653/v1/2023.acl-long.348 | null | null | null | null | null | null | null |
2305.12870 | Lion: Adversarial Distillation of Proprietary Large Language Models | ['Yuxin Jiang', 'Chunkit Chan', 'Mingyang Chen', 'Wei Wang'] | ['cs.CL'] | The practice of transferring knowledge from a sophisticated, proprietary
large language model (LLM) to a compact, open-source LLM has garnered
considerable attention. Previous works have focused on a unidirectional
knowledge distillation way by aligning the responses of the student model with
those of the teacher model... | 2023-05-22T09:49:16Z | 21 pages, 5 figures, EMNLP 2023 main conference | null | null | null | null | null | null | null | null | null |
2,305.12908 | Language Models for German Text Simplification: Overcoming Parallel Data
Scarcity through Style-specific Pre-training | ['Miriam Anschütz', 'Joshua Oehms', 'Thomas Wimmer', 'Bartłomiej Jezierski', 'Georg Groh'] | ['cs.CL'] | Automatic text simplification systems help to reduce textual information
barriers on the internet. However, for languages other than English, only few
parallel data to train these systems exists. We propose a two-step approach to
overcome this data scarcity issue. First, we fine-tuned language models on a
corpus of Ger... | 2023-05-22T10:41:30Z | Accepted to ACL Findings 2023 | null | 10.18653/v1/2023.findings-acl.74 | null | null | null | null | null | null | null |
2,305.12953 | Enhancing Next Active Object-based Egocentric Action Anticipation with
Guided Attention | ['Sanket Thakur', 'Cigdem Beyan', 'Pietro Morerio', 'Vittorio Murino', 'Alessio Del Bue'] | ['cs.CV'] | Short-term action anticipation (STA) in first-person videos is a challenging
task that involves understanding the next active object interactions and
predicting future actions. Existing action anticipation methods have primarily
focused on utilizing features extracted from video clips, but often overlooked
the importan... | 2023-05-22T11:56:10Z | Accepted to IEEE ICIP 2023, see project page here :
https://sanketsans.github.io/guided-attention-egocentric.html | null | null | Enhancing Next Active Object-Based Egocentric Action Anticipation with Guided Attention | ['Sanket Thakur', 'Cigdem Beyan', 'Pietro Morerio', 'Vittorio Murino', 'A. D. Bue'] | 2,023 | International Conference on Information Photonics | 6 | 24 | ['Computer Science'] |
2,305.13009 | Textually Pretrained Speech Language Models | ['Michael Hassid', 'Tal Remez', 'Tu Anh Nguyen', 'Itai Gat', 'Alexis Conneau', 'Felix Kreuk', 'Jade Copet', 'Alexandre Defossez', 'Gabriel Synnaeve', 'Emmanuel Dupoux', 'Roy Schwartz', 'Yossi Adi'] | ['cs.CL', 'cs.LG', 'cs.SD', 'eess.AS'] | Speech language models (SpeechLMs) process and generate acoustic data only,
without textual supervision. In this work, we propose TWIST, a method for
training SpeechLMs using a warm-start from a pretrained textual language
models. We show using both automatic and human evaluations that TWIST
outperforms a cold-start Sp... | 2023-05-22T13:12:16Z | NeurIPS 2023 | null | null | null | null | null | null | null | null | null |
2,305.13035 | Getting ViT in Shape: Scaling Laws for Compute-Optimal Model Design | ['Ibrahim Alabdulmohsin', 'Xiaohua Zhai', 'Alexander Kolesnikov', 'Lucas Beyer'] | ['cs.CV', 'cs.LG', 'I.2.10; I.2.6'] | Scaling laws have been recently employed to derive compute-optimal model size
(number of parameters) for a given compute duration. We advance and refine such
methods to infer compute-optimal model shapes, such as width and depth, and
successfully implement this in vision transformers. Our shape-optimized vision
transfo... | 2023-05-22T13:39:28Z | 10 pages, 7 figures, 9 tables. Version 2: Layout fixes | 37th Conference on Neural Information Processing Systems (NeurIPS
2023) | null | Getting ViT in Shape: Scaling Laws for Compute-Optimal Model Design | ['Ibrahim M. Alabdulmohsin', 'Xiaohua Zhai', 'Alexander Kolesnikov', 'Lucas Beyer'] | 2,023 | Neural Information Processing Systems | 64 | 82 | ['Computer Science'] |
2,305.13048 | RWKV: Reinventing RNNs for the Transformer Era | ['Bo Peng', 'Eric Alcaide', 'Quentin Anthony', 'Alon Albalak', 'Samuel Arcadinho', 'Stella Biderman', 'Huanqi Cao', 'Xin Cheng', 'Michael Chung', 'Matteo Grella', 'Kranthi Kiran GV', 'Xuzheng He', 'Haowen Hou', 'Jiaju Lin', 'Przemyslaw Kazienko', 'Jan Kocon', 'Jiaming Kong', 'Bartlomiej Koptyra', 'Hayden Lau', 'Krishna... | ['cs.CL', 'cs.AI'] | Transformers have revolutionized almost all natural language processing (NLP)
tasks but suffer from memory and computational complexity that scales
quadratically with sequence length. In contrast, recurrent neural networks
(RNNs) exhibit linear scaling in memory and computational requirements but
struggle to match the ... | 2023-05-22T13:57:41Z | null | null | null | null | null | null | null | null | null | null |
2,305.13117 | AVeriTeC: A Dataset for Real-world Claim Verification with Evidence from
the Web | ['Michael Schlichtkrull', 'Zhijiang Guo', 'Andreas Vlachos'] | ['cs.CL'] | Existing datasets for automated fact-checking have substantial limitations,
such as relying on artificial claims, lacking annotations for evidence and
intermediate reasoning, or including evidence published after the claim. In
this paper we introduce AVeriTeC, a new dataset of 4,568 real-world claims
covering fact-chec... | 2023-05-22T15:17:18Z | Accepted to NeurIPS 2023 Datasets & Benchmarks Track | null | null | AVeriTeC: A Dataset for Real-world Claim Verification with Evidence from the Web | ['M. Schlichtkrull', 'Zhijiang Guo', 'Andreas Vlachos'] | 2,023 | Neural Information Processing Systems | 76 | 98 | ['Computer Science'] |
2,305.13179 | Teaching Probabilistic Logical Reasoning to Transformers | ['Aliakbar Nafar', 'Kristen Brent Venable', 'Parisa Kordjamshidi'] | ['cs.CL', 'cs.AI', 'I.2.7'] | In this paper, we evaluate the capability of transformer-based language
models in making inferences over uncertain text that includes uncertain rules
of reasoning. We cover both Pre-trained Language Models (PLMs) and generative
Large Language Models (LLMs). Our evaluation results show that both generations
of language ... | 2023-05-22T16:08:20Z | This work is part of the proceedings of EACL Findings 2024 | null | null | Teaching Probabilistic Logical Reasoning to Transformers | ['Aliakbar Nafar', 'K. Venable', 'Parisa Kordjamshidi'] | 2,023 | Findings | 4 | 45 | ['Computer Science'] |
2,305.13194 | SEAHORSE: A Multilingual, Multifaceted Dataset for Summarization
Evaluation | ['Elizabeth Clark', 'Shruti Rijhwani', 'Sebastian Gehrmann', 'Joshua Maynez', 'Roee Aharoni', 'Vitaly Nikolaev', 'Thibault Sellam', 'Aditya Siddhant', 'Dipanjan Das', 'Ankur P. Parikh'] | ['cs.CL'] | Reliable automatic evaluation of summarization systems is challenging due to
the multifaceted and subjective nature of the task. This is especially the case
for languages other than English, where human evaluations are scarce. In this
work, we introduce SEAHORSE, a dataset for multilingual, multifaceted
summarization e... | 2023-05-22T16:25:07Z | null | null | null | null | null | null | null | null | null | null |
2,305.13242 | MAGE: Machine-generated Text Detection in the Wild | ['Yafu Li', 'Qintong Li', 'Leyang Cui', 'Wei Bi', 'Zhilin Wang', 'Longyue Wang', 'Linyi Yang', 'Shuming Shi', 'Yue Zhang'] | ['cs.CL'] | Large language models (LLMs) have achieved human-level text generation,
emphasizing the need for effective AI-generated text detection to mitigate
risks like the spread of fake news and plagiarism. Existing research has been
constrained by evaluating detection methods on specific domains or particular
language models. ... | 2023-05-22T17:13:29Z | ACL 2024 | null | null | null | null | null | null | null | null | null |
2,305.13245 | GQA: Training Generalized Multi-Query Transformer Models from Multi-Head
Checkpoints | ['Joshua Ainslie', 'James Lee-Thorp', 'Michiel de Jong', 'Yury Zemlyanskiy', 'Federico Lebrón', 'Sumit Sanghai'] | ['cs.CL', 'cs.LG'] | Multi-query attention (MQA), which only uses a single key-value head,
drastically speeds up decoder inference. However, MQA can lead to quality
degradation, and moreover it may not be desirable to train a separate model
just for faster inference. We (1) propose a recipe for uptraining existing
multi-head language model... | 2023-05-22T17:16:38Z | Accepted at EMNLP 2023. Added to related work | null | null | GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints | ['J. Ainslie', 'J. Lee-Thorp', 'Michiel de Jong', 'Yury Zemlyanskiy', "Federico Lebr'on", 'Sumit K. Sanghai'] | 2,023 | Conference on Empirical Methods in Natural Language Processing | 709 | 31 | ['Computer Science'] |
2,305.13272 | CLASS: A Design Framework for building Intelligent Tutoring Systems
based on Learning Science principles | ['Shashank Sonkar', 'Naiming Liu', 'Debshila Basu Mallick', 'Richard G. Baraniuk'] | ['cs.CL'] | We present a design framework called Conversational Learning with Analytical
Step-by-Step Strategies (CLASS) for building advanced Intelligent Tutoring
Systems (ITS) powered by high-performance Large Language Models (LLMs). The
CLASS framework empowers ITS with two key capabilities. First, through a
carefully curated s... | 2023-05-22T17:35:05Z | Paper accepted at EMNLP 2023 | null | null | null | null | null | null | null | null | null |
2,305.13297 | Investigating the Role of Feed-Forward Networks in Transformers Using
Parallel Attention and Feed-Forward Net Design | ['Shashank Sonkar', 'Richard G. Baraniuk'] | ['cs.CL'] | This paper investigates the key role of Feed-Forward Networks (FFNs) in
transformer models by utilizing the Parallel Attention and Feed-Forward Net
Design (PAF) architecture, and comparing it to their Series Attention and
Feed-Forward Net Design (SAF) counterparts. Central to the effectiveness of PAF
are two main assum... | 2023-05-22T17:56:09Z | null | null | null | Investigating the Role of Feed-Forward Networks in Transformers Using Parallel Attention and Feed-Forward Net Design | ['Shashank Sonkar', 'Richard Baraniuk'] | 2,023 | arXiv.org | 4 | 23 | ['Computer Science'] |
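
For context, the parallel-versus-series distinction in this abstract comes down to whether the feed-forward block reads the attention output or the same residual input as attention. A toy sketch under that interpretation is shown below; `attn` and `ffn` are stand-in callables and layer norms are omitted, so this is not the paper's code.

```python
# Toy contrast between series (SAF) and parallel (PAF) transformer block layouts.
def saf_block(x, attn, ffn):
    h = x + attn(x)               # series: the FFN sees the attention output
    return h + ffn(h)

def paf_block(x, attn, ffn):
    return x + attn(x) + ffn(x)   # parallel: attention and FFN both read the block input

# Dummy usage with scalar stand-ins for the sublayers.
print(saf_block(1.0, lambda v: 0.1 * v, lambda v: 0.1 * v))  # 1.21
print(paf_block(1.0, lambda v: 0.1 * v, lambda v: 0.1 * v))  # 1.2
```
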
2,305.13301 | Training Diffusion Models with Reinforcement Learning | ['Kevin Black', 'Michael Janner', 'Yilun Du', 'Ilya Kostrikov', 'Sergey Levine'] | ['cs.LG', 'cs.AI', 'cs.CV'] | Diffusion models are a class of flexible generative models trained with an
approximation to the log-likelihood objective. However, most use cases of
diffusion models are not concerned with likelihoods, but instead with
downstream objectives such as human-perceived image quality or drug
effectiveness. In this paper, we ... | 2023-05-22T17:57:41Z | 23 pages, 16 figures | null | null | Training Diffusion Models with Reinforcement Learning | ['Kevin Black', 'Michael Janner', 'Yilun Du', 'Ilya Kostrikov', 'S. Levine'] | 2,023 | International Conference on Learning Representations | 379 | 69 | ['Computer Science'] |
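
Since this abstract frames denoising as something to optimize for a downstream objective rather than likelihood, a bare-bones REINFORCE-style loss over per-step log-probabilities is a useful mental model. The sketch below is a hedged approximation of that idea, not the paper's DDPO implementation; shapes and the mean-reward baseline are assumptions.

```python
# Minimal policy-gradient sketch: weight summed per-step log-probs of sampled
# denoising actions by a downstream reward (e.g. an image-quality score).
import torch

def reinforce_loss(step_log_probs: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
    """step_log_probs: [batch, n_steps]; rewards: [batch] scalar rewards."""
    advantages = rewards - rewards.mean()  # simple baseline to reduce variance
    return -(step_log_probs.sum(dim=1) * advantages).mean()

log_probs = torch.randn(8, 50, requires_grad=True)  # hypothetical trajectories
rewards = torch.rand(8)
loss = reinforce_loss(log_probs, rewards)
loss.backward()
```
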
2,305.13303 | Towards Unsupervised Recognition of Token-level Semantic Differences in
Related Documents | ['Jannis Vamvas', 'Rico Sennrich'] | ['cs.CL'] | Automatically highlighting words that cause semantic differences between two
documents could be useful for a wide range of applications. We formulate
recognizing semantic differences (RSD) as a token-level regression task and
study three unsupervised approaches that rely on a masked language model. To
assess the approa... | 2023-05-22T17:58:04Z | EMNLP 2023 | null | null | Towards Unsupervised Recognition of Token-level Semantic Differences in Related Documents | ['Jannis Vamvas', 'Rico Sennrich'] | 2,023 | Conference on Empirical Methods in Natural Language Processing | 2 | 25 | ['Computer Science'] |
2,305.13516 | Scaling Speech Technology to 1,000+ Languages | ['Vineel Pratap', 'Andros Tjandra', 'Bowen Shi', 'Paden Tomasello', 'Arun Babu', 'Sayani Kundu', 'Ali Elkahky', 'Zhaoheng Ni', 'Apoorv Vyas', 'Maryam Fazel-Zarandi', 'Alexei Baevski', 'Yossi Adi', 'Xiaohui Zhang', 'Wei-Ning Hsu', 'Alexis Conneau', 'Michael Auli'] | ['cs.CL', 'cs.SD', 'eess.AS'] | Expanding the language coverage of speech technology has the potential to
improve access to information for many more people. However, current speech
technology is restricted to about one hundred languages which is a small
fraction of the over 7,000 languages spoken around the world. The Massively
Multilingual Speech (... | 2023-05-22T22:09:41Z | null | null | null | Scaling Speech Technology to 1, 000+ Languages | ['Vineel Pratap', 'Andros Tjandra', 'Bowen Shi', 'Paden Tomasello', 'Arun Babu', 'Sayani Kundu', 'A. Elkahky', 'Zhaoheng Ni', 'Apoorv Vyas', 'Maryam Fazel-Zarandi', 'Alexei Baevski', 'Yossi Adi', 'Xiaohui Zhang', 'Wei-Ning Hsu', 'Alexis Conneau', 'Michael Auli'] | 2,023 | Journal of machine learning research | 361 | 107 | ['Computer Science', 'Engineering'] |
2,305.13523 | A Study of Generative Large Language Model for Medical Research and
Healthcare | ['Cheng Peng', 'Xi Yang', 'Aokun Chen', 'Kaleb E Smith', 'Nima PourNejatian', 'Anthony B Costa', 'Cheryl Martin', 'Mona G Flores', 'Ying Zhang', 'Tanja Magoc', 'Gloria Lipori', 'Duane A Mitchell', 'Naykky S Ospina', 'Mustafa M Ahmed', 'William R Hogan', 'Elizabeth A Shenkman', 'Yi Guo', 'Jiang Bian', 'Yonghui Wu'] | ['cs.CL'] | There is enormous enthusiasm and concerns in using large language models
(LLMs) in healthcare, yet current assumptions are all based on general-purpose
LLMs such as ChatGPT. This study develops a clinical generative LLM,
GatorTronGPT, using 277 billion words of mixed clinical and English text with a
GPT-3 architecture ... | 2023-05-22T22:37:24Z | null | null | 10.1038/s41746-023-00958-w | null | null | null | null | null | null | null |
2,305.13582 | Translation and Fusion Improves Zero-shot Cross-lingual Information
Extraction | ['Yang Chen', 'Vedaant Shah', 'Alan Ritter'] | ['cs.CL'] | Large language models (LLMs) combined with instruction tuning have shown
significant progress in information extraction (IE) tasks, exhibiting strong
generalization capabilities to unseen datasets by following annotation
guidelines. However, their applicability to low-resource languages remains
limited due to lack of b... | 2023-05-23T01:23:22Z | null | null | null | Translation and Fusion Improves Zero-shot Cross-lingual Information Extraction | ['Yang Chen', 'Vedaant Shah', 'Alan Ritter'] | 2,023 | null | 4 | 67 | ['Computer Science'] |
2,305.13655 | LLM-grounded Diffusion: Enhancing Prompt Understanding of Text-to-Image
Diffusion Models with Large Language Models | ['Long Lian', 'Boyi Li', 'Adam Yala', 'Trevor Darrell'] | ['cs.CV'] | Recent advancements in text-to-image diffusion models have yielded impressive
results in generating realistic and diverse images. However, these models still
struggle with complex prompts, such as those that involve numeracy and spatial
reasoning. This work proposes to enhance prompt understanding capabilities in
diffu... | 2023-05-23T03:59:06Z | Transactions on Machine Learning Research (TMLR) 2024, with Featured
Certification | null | null | null | null | null | null | null | null | null |
2,305.13686 | MP-SENet: A Speech Enhancement Model with Parallel Denoising of
Magnitude and Phase Spectra | ['Ye-Xin Lu', 'Yang Ai', 'Zhen-Hua Ling'] | ['eess.AS'] | This paper proposes MP-SENet, a novel Speech Enhancement Network which
directly denoises Magnitude and Phase spectra in parallel. The proposed
MP-SENet adopts a codec architecture in which the encoder and decoder are
bridged by convolution-augmented transformers. The encoder aims to encode
time-frequency representation... | 2023-05-23T04:48:51Z | Accepted by Interspeech 2023 | null | 10.21437/Interspeech.2023-1441 | null | null | null | null | null | null | null |
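
To make the magnitude/phase decomposition concrete: the model operates on the two components of a complex STFT and recombines them after enhancement. The sketch below only shows that round trip with assumed FFT parameters and the enhancement network elided; it is not MP-SENet itself.

```python
# Magnitude/phase split and reconstruction of a spectrogram in PyTorch.
import torch

x = torch.randn(16000)                       # 1 s of 16 kHz audio (random stand-in)
window = torch.hann_window(400)
spec = torch.stft(x, n_fft=400, hop_length=100, window=window, return_complex=True)
magnitude, phase = spec.abs(), spec.angle()
# ... a model would predict enhanced magnitude and phase here ...
enhanced = torch.polar(magnitude, phase)     # recombine into a complex spectrogram
y = torch.istft(enhanced, n_fft=400, hop_length=100, window=window, length=x.shape[0])
```
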
2,305.13698 | Exploring Large Language Models for Classical Philology | ['Frederick Riemenschneider', 'Anette Frank'] | ['cs.CL', 'I.2.7'] | Recent advances in NLP have led to the creation of powerful language models
for many languages including Ancient Greek and Latin. While prior work on
Classical languages unanimously uses BERT, in this work we create four language
models for Ancient Greek that vary along two dimensions to study their
versatility for tas... | 2023-05-23T05:21:02Z | Paper accepted for publication at ACL 2023 Main; 10 pages, 7 appendix
pages, 4 figures, 13 tables | null | null | null | null | null | null | null | null | null |
2,305.13711 | LLM-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain
Conversations with Large Language Models | ['Yen-Ting Lin', 'Yun-Nung Chen'] | ['cs.CL', 'cs.AI'] | We propose LLM-Eval, a unified multi-dimensional automatic evaluation method
for open-domain conversations with large language models (LLMs). Existing
evaluation methods often rely on human annotations, ground-truth responses, or
multiple LLM prompts, which can be expensive and time-consuming. To address
these issues, ... | 2023-05-23T05:57:09Z | Accepted at 5th NLP4ConvAI | null | null | LLM-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models | ['Yen-Ting Lin', 'Yun-Nung (Vivian) Chen'] | 2,023 | NLP4CONVAI | 94 | 41 | ['Computer Science'] |
2,305.13786 | Perception Test: A Diagnostic Benchmark for Multimodal Video Models | ['Viorica Pătrăucean', 'Lucas Smaira', 'Ankush Gupta', 'Adrià Recasens Continente', 'Larisa Markeeva', 'Dylan Banarse', 'Skanda Koppula', 'Joseph Heyward', 'Mateusz Malinowski', 'Yi Yang', 'Carl Doersch', 'Tatiana Matejovicova', 'Yury Sulsky', 'Antoine Miech', 'Alex Frechette', 'Hanna Klimczak', 'Raphael Koster', 'Junl... | ['cs.CV', 'cs.AI', 'cs.LG'] | We propose a novel multimodal video benchmark - the Perception Test - to
evaluate the perception and reasoning skills of pre-trained multimodal models
(e.g. Flamingo, SeViLA, or GPT-4). Compared to existing benchmarks that focus
on computational tasks (e.g. classification, detection or tracking), the
Perception Test fo... | 2023-05-23T07:54:37Z | 37th Conference on Neural Information Processing Systems (NeurIPS
2023) Track on Datasets and Benchmarks | null | null | Perception Test: A Diagnostic Benchmark for Multimodal Video Models | ['Viorica Puatruaucean', 'Lucas Smaira', 'Ankush Gupta', 'Adrià Recasens Continente', 'L. Markeeva', 'Dylan Banarse', 'Skanda Koppula', 'Joseph Heyward', 'Mateusz Malinowski', 'Yezhou Yang', 'Carl Doersch', 'Tatiana Matejovicova', 'Yury Sulsky', 'Antoine Miech', 'A. Fréchette', 'H. Klimczak', 'R. Koster', 'Junlin Zhang... | 2,023 | Neural Information Processing Systems | 179 | 60 | ['Computer Science'] |
2,305.13820 | An Open Dataset and Model for Language Identification | ['Laurie Burchell', 'Alexandra Birch', 'Nikolay Bogoychev', 'Kenneth Heafield'] | ['cs.CL'] | Language identification (LID) is a fundamental step in many natural language
processing pipelines. However, current LID systems are far from perfect,
particularly on lower-resource languages. We present a LID model which achieves
a macro-average F1 score of 0.93 and a false positive rate of 0.033 across 201
languages, ... | 2023-05-23T08:43:42Z | To be published in ACL 2023 | null | 10.18653/v1/2023.acl-short.75 | An Open Dataset and Model for Language Identification | ['Laurie Burchell', 'Alexandra Birch', 'Nikolay Bogoychev', 'Kenneth Heafield'] | 2,023 | Annual Meeting of the Association for Computational Linguistics | 36 | 53 | ['Computer Science'] |
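
The headline numbers in this row are a macro-averaged F1 over language labels and a false positive rate. A small sketch of how such metrics can be computed is given below with toy labels; it is not the paper's evaluation code.

```python
# Macro-F1 over language labels plus a per-language false positive rate.
from sklearn.metrics import f1_score

y_true = ["eng", "fra", "deu", "eng", "spa", "fra"]
y_pred = ["eng", "fra", "deu", "fra", "spa", "fra"]

macro_f1 = f1_score(y_true, y_pred, average="macro")

def false_positive_rate(y_true, y_pred, lang):
    fp = sum(t != lang and p == lang for t, p in zip(y_true, y_pred))
    tn = sum(t != lang and p != lang for t, p in zip(y_true, y_pred))
    return fp / (fp + tn) if fp + tn else 0.0

print(macro_f1, false_positive_rate(y_true, y_pred, "fra"))
```
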
2,305.13840 | Control-A-Video: Controllable Text-to-Video Diffusion Models with Motion
Prior and Reward Feedback Learning | ['Weifeng Chen', 'Yatai Ji', 'Jie Wu', 'Hefeng Wu', 'Pan Xie', 'Jiashi Li', 'Xin Xia', 'Xuefeng Xiao', 'Liang Lin'] | ['cs.CV', 'cs.AI', 'cs.LG', 'cs.MM'] | Recent advances in text-to-image (T2I) diffusion models have enabled
impressive image generation capabilities guided by text prompts. However,
extending these techniques to video generation remains challenging, with
existing text-to-video (T2V) methods often struggling to produce high-quality
and motion-consistent vide... | 2023-05-23T09:03:19Z | null | null | null | null | null | null | null | null | null | null |
2,305.13873 | Unsafe Diffusion: On the Generation of Unsafe Images and Hateful Memes
From Text-To-Image Models | ['Yiting Qu', 'Xinyue Shen', 'Xinlei He', 'Michael Backes', 'Savvas Zannettou', 'Yang Zhang'] | ['cs.CV', 'cs.CR', 'cs.CY', 'cs.LG', 'cs.SI'] | State-of-the-art Text-to-Image models like Stable Diffusion and DALLE$\cdot$2
are revolutionizing how people generate visual content. At the same time,
society has serious concerns about how adversaries can exploit such models to
generate unsafe images. In this work, we focus on demystifying the generation
of unsafe im... | 2023-05-23T09:48:16Z | To Appear in the ACM Conference on Computer and Communications
Security, November 26, 2023 | null | null | Unsafe Diffusion: On the Generation of Unsafe Images and Hateful Memes From Text-To-Image Models | ['Y. Qu', 'Xinyue Shen', 'Xinlei He', 'M. Backes', 'Savvas Zannettou', 'Yang Zhang'] | 2,023 | Conference on Computer and Communications Security | 124 | 67 | ['Computer Science'] |
2,305.13915 | DAPR: A Benchmark on Document-Aware Passage Retrieval | ['Kexin Wang', 'Nils Reimers', 'Iryna Gurevych'] | ['cs.IR', 'cs.CL'] | The work of neural retrieval so far focuses on ranking short texts and is
challenged with long documents. There are many cases where the users want to
find a relevant passage within a long document from a huge corpus, e.g.
Wikipedia articles, research papers, etc. We propose and name this task
\emph{Document-Aware Pass... | 2023-05-23T10:39:57Z | Accepted at ACL 2024 Main Conference | null | null | null | null | null | null | null | null | null |
2,305.14045 | The CoT Collection: Improving Zero-shot and Few-shot Learning of
Language Models via Chain-of-Thought Fine-Tuning | ['Seungone Kim', 'Se June Joo', 'Doyoung Kim', 'Joel Jang', 'Seonghyeon Ye', 'Jamin Shin', 'Minjoon Seo'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Language models (LMs) with less than 100B parameters are known to perform
poorly on chain-of-thought (CoT) reasoning in contrast to large LMs when
solving unseen tasks. In this work, we aim to equip smaller LMs with the
step-by-step reasoning capability by instruction tuning with CoT rationales. In
order to achieve thi... | 2023-05-23T13:14:59Z | EMNLP 2023 (Main Conference) | null | null | null | null | null | null | null | null | null |
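
The fine-tuning setup described here pairs task instructions with chain-of-thought rationales as targets. Below is a toy record in an assumed format (field names and the concatenation scheme are illustrative, not the released CoT Collection schema).

```python
# Illustrative CoT instruction-tuning record and target construction.
cot_example = {
    "instruction": "Janet has 3 apples and buys 2 more. How many apples does she have?",
    "rationale": "She starts with 3 apples. Buying 2 more gives 3 + 2 = 5.",
    "answer": "5",
}
# A typical target sequence places the rationale before the final answer.
target = f"{cot_example['rationale']} So the answer is {cot_example['answer']}."
print(target)
```
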
2,305.14201 | Goat: Fine-tuned LLaMA Outperforms GPT-4 on Arithmetic Tasks | ['Tiedong Liu', 'Bryan Kian Hsiang Low'] | ['cs.LG', 'cs.AI', 'cs.CL'] | We introduce Goat, a fine-tuned LLaMA model that significantly outperforms
GPT-4 on a range of arithmetic tasks. Fine-tuned on a synthetically generated
dataset, Goat achieves state-of-the-art performance on BIG-bench arithmetic
sub-task. In particular, the zero-shot Goat-7B matches or even surpasses the
accuracy achie... | 2023-05-23T16:20:30Z | null | null | null | null | null | null | null | null | null | null |
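
The synthetically generated arithmetic data this abstract mentions can be approximated by sampling operand pairs and emitting prompt/answer pairs. The tiny generator below is only illustrative; field names, ranges, and the prompt wording are assumptions.

```python
# Toy generator of arithmetic instruction data in the spirit of the abstract.
import random

def make_addition_example(max_value: int = 10**6) -> dict:
    a, b = random.randint(0, max_value), random.randint(0, max_value)
    return {"prompt": f"What is {a} + {b}?", "completion": str(a + b)}

for _ in range(3):
    print(make_addition_example())
```
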
2,305.14202 | Fine-tuned LLMs Know More, Hallucinate Less with Few-Shot
Sequence-to-Sequence Semantic Parsing over Wikidata | ['Silei Xu', 'Shicheng Liu', 'Theo Culhane', 'Elizaveta Pertseva', 'Meng-Hsi Wu', 'Sina J. Semnani', 'Monica S. Lam'] | ['cs.CL'] | While large language models (LLMs) can answer many questions correctly, they
can also hallucinate and give wrong answers. Wikidata, with its over 12 billion
facts, can be used to ground LLMs to improve their factuality. This paper
presents WikiWebQuestions, a high-quality question answering benchmark for
Wikidata. Port... | 2023-05-23T16:20:43Z | EMNLP 2023 Main | null | null | null | null | null | null | null | null | null |
2,305.14214 | CompoundPiece: Evaluating and Improving Decompounding Performance of
Language Models | ['Benjamin Minixhofer', 'Jonas Pfeiffer', 'Ivan Vulić'] | ['cs.CL'] | While many languages possess processes of joining two or more words to create
compound words, previous studies have been typically limited only to languages
with excessively productive compound formation (e.g., German, Dutch) and there
is no public dataset containing compound and non-compound words across a large
numbe... | 2023-05-23T16:32:27Z | EMNLP 2023 | null | null | CompoundPiece: Evaluating and Improving Decompounding Performance of Language Models | ['Benjamin Minixhofer', 'Jonas Pfeiffer', 'Ivan Vulic'] | 2,023 | Conference on Empirical Methods in Natural Language Processing | 7 | 71 | ['Computer Science'] |
2,305.14232 | Pre-training Multi-task Contrastive Learning Models for Scientific
Literature Understanding | ['Yu Zhang', 'Hao Cheng', 'Zhihong Shen', 'Xiaodong Liu', 'Ye-Yi Wang', 'Jianfeng Gao'] | ['cs.CL', 'cs.DL', 'cs.IR', 'cs.LG'] | Scientific literature understanding tasks have gained significant attention
due to their potential to accelerate scientific discovery. Pre-trained language
models (LMs) have shown effectiveness in these tasks, especially when tuned via
contrastive learning. However, jointly utilizing pre-training data across
multiple h... | 2023-05-23T16:47:22Z | 17 pages; Accepted to Findings of EMNLP 2023 (Project Page:
https://scimult.github.io/) | null | null | null | null | null | null | null | null | null |
2,305.14233 | Enhancing Chat Language Models by Scaling High-quality Instructional
Conversations | ['Ning Ding', 'Yulin Chen', 'Bokai Xu', 'Yujia Qin', 'Zhi Zheng', 'Shengding Hu', 'Zhiyuan Liu', 'Maosong Sun', 'Bowen Zhou'] | ['cs.CL', 'cs.AI'] | Fine-tuning on instruction data has been widely validated as an effective
practice for implementing chat language models like ChatGPT. Scaling the
diversity and quality of such data, although straightforward, stands a great
chance of leading to improved performance. This paper aims to improve the upper
bound of open-so... | 2023-05-23T16:49:14Z | null | null | null | null | null | null | null | null | null | null |
2,305.14251 | FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long
Form Text Generation | ['Sewon Min', 'Kalpesh Krishna', 'Xinxi Lyu', 'Mike Lewis', 'Wen-tau Yih', 'Pang Wei Koh', 'Mohit Iyyer', 'Luke Zettlemoyer', 'Hannaneh Hajishirzi'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Evaluating the factuality of long-form text generated by large language
models (LMs) is non-trivial because (1) generations often contain a mixture of
supported and unsupported pieces of information, making binary judgments of
quality inadequate, and (2) human evaluation is time-consuming and costly. In
this paper, we ... | 2023-05-23T17:06:00Z | 25 pages; 7 figures. Published as a main conference paper at EMNLP
2023. Code available at https://github.com/shmsw25/FActScore | null | null | null | null | null | null | null | null | null |
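
At its core, the metric described here is the fraction of atomic facts in a generation that a knowledge source supports. The sketch below computes only that quantity with made-up facts and support labels; the real pipeline extracts atomic facts with an LM and verifies them against retrieved evidence.

```python
# Fraction of atomic facts judged supported by a knowledge source.
def factscore(atomic_facts: list[str], is_supported) -> float:
    """is_supported: callable mapping an atomic fact to True/False."""
    if not atomic_facts:
        return 0.0
    return sum(is_supported(f) for f in atomic_facts) / len(atomic_facts)

facts = ["X was born in 1970.", "X studied physics.", "X won a Nobel Prize."]
supported = {facts[0]: True, facts[1]: True, facts[2]: False}
print(factscore(facts, supported.get))  # 0.666...
```
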
2,305.14283 | Query Rewriting for Retrieval-Augmented Large Language Models | ['Xinbei Ma', 'Yeyun Gong', 'Pengcheng He', 'Hai Zhao', 'Nan Duan'] | ['cs.CL'] | Large Language Models (LLMs) play powerful, black-box readers in the
retrieve-then-read pipeline, making remarkable progress in knowledge-intensive
tasks. This work introduces a new framework, Rewrite-Retrieve-Read instead of
the previous retrieve-then-read for the retrieval-augmented LLMs from the
perspective of the q... | 2023-05-23T17:27:50Z | EMNLP2023 | null | null | Query Rewriting for Retrieval-Augmented Large Language Models | ['Xinbei Ma', 'Yeyun Gong', 'Pengcheng He', 'Hai Zhao', 'Nan Duan'] | 2,023 | arXiv.org | 115 | 62 | ['Computer Science'] |
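
The rewrite-retrieve-read pipeline can be summarized as three chained calls: rewrite the user question into a search query, retrieve with it, then read the retrieved context to answer. The sketch below uses hypothetical `llm` and `search` callables and prompt wording of my own; it does not reflect the paper's trainable rewriter.

```python
# Hedged sketch of a rewrite-retrieve-read loop with stand-in components.
def rewrite_retrieve_read(question: str, llm, search, k: int = 5) -> str:
    query = llm(f"Rewrite this question as a search query:\n{question}")
    passages = search(query, k=k)
    context = "\n".join(passages)
    return llm(f"Answer the question using the context.\nContext:\n{context}\nQuestion: {question}")

# Toy usage with dummy callables standing in for an LLM and a retriever.
answer = rewrite_retrieve_read(
    "Who proposed the retrieve-then-read pipeline?",
    llm=lambda prompt: "dummy output",
    search=lambda q, k: ["dummy passage"] * k,
)
```
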
2,305.14292 | WikiChat: Stopping the Hallucination of Large Language Model Chatbots by
Few-Shot Grounding on Wikipedia | ['Sina J. Semnani', 'Violet Z. Yao', 'Heidi C. Zhang', 'Monica S. Lam'] | ['cs.CL'] | This paper presents the first few-shot LLM-based chatbot that almost never
hallucinates and has high conversationality and low latency. WikiChat is
grounded on the English Wikipedia, the largest curated free-text corpus.
WikiChat generates a response from an LLM, retains only the grounded facts,
and combines them wit... | 2023-05-23T17:37:36Z | Findings of EMNLP 2023 | null | 10.18653/v1/2023.findings-emnlp.157 | WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia | ['Sina J. Semnani', 'Violet Z. Yao', 'He Zhang', 'M. Lam'] | 2,023 | Conference on Empirical Methods in Natural Language Processing | 81 | 94 | ['Computer Science'] |
2,305.14303 | QTSumm: Query-Focused Summarization over Tabular Data | ['Yilun Zhao', 'Zhenting Qi', 'Linyong Nan', 'Boyu Mi', 'Yixin Liu', 'Weijin Zou', 'Simeng Han', 'Ruizhe Chen', 'Xiangru Tang', 'Yumo Xu', 'Dragomir Radev', 'Arman Cohan'] | ['cs.CL'] | People primarily consult tables to conduct data analysis or answer specific
questions. Text generation systems that can provide accurate table summaries
tailored to users' information needs can facilitate more efficient access to
relevant data insights. Motivated by this, we define a new query-focused table
summarizati... | 2023-05-23T17:43:51Z | Accepted at EMNLP 2023 | null | null | QTSumm: Query-Focused Summarization over Tabular Data | ['Yilun Zhao', 'Zhenting Qi', 'Linyong Nan', 'Boyu Mi', 'Yixin Liu', 'Weijin Zou', 'Simeng Han', 'Ruizhe Chen', 'Xiangru Tang', 'Yumo Xu', 'Dragomir R. Radev', 'Arman Cohan'] | 2,023 | Conference on Empirical Methods in Natural Language Processing | 1 | 65 | ['Computer Science'] |
2,305.14314 | QLoRA: Efficient Finetuning of Quantized LLMs | ['Tim Dettmers', 'Artidoro Pagnoni', 'Ari Holtzman', 'Luke Zettlemoyer'] | ['cs.LG'] | We present QLoRA, an efficient finetuning approach that reduces memory usage
enough to finetune a 65B parameter model on a single 48GB GPU while preserving
full 16-bit finetuning task performance. QLoRA backpropagates gradients through
a frozen, 4-bit quantized pretrained language model into Low Rank
Adapters~(LoRA). O... | 2023-05-23T17:50:33Z | Extended NeurIPS submission | null | null | null | null | null | null | null | null | null |
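
A minimal sketch of the core idea in this row is a frozen quantized base weight with trainable low-rank adapters receiving all gradients. The code below only crudely simulates quantization by rounding and is an assumption-laden toy; the actual method uses 4-bit NormalFloat quantization, double quantization, and paged optimizers.

```python
# Toy LoRA-over-frozen-quantized-weight layer (illustrative, not QLoRA's code).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_f: int, out_f: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        w = torch.randn(out_f, in_f)
        scale = w.abs().max() / 7                      # crude stand-in for 4-bit quantization
        self.register_buffer("w_q", (w / scale).round().clamp(-8, 7) * scale)
        self.lora_a = nn.Parameter(torch.randn(r, in_f) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_f, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # frozen quantized path plus trainable low-rank update
        return x @ self.w_q.t() + self.scaling * (x @ self.lora_a.t()) @ self.lora_b.t()

layer = LoRALinear(1024, 1024)
y = layer(torch.randn(2, 1024))
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # only LoRA params train
```
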