Dataset schema (16 columns). For string columns, "stringlengths" lists the min/max character length and "stringclasses" the number of distinct values; for float64 columns the min/max are shown (viewer-abbreviated, e.g. "2.51k"); the ss_* columns appear to carry Semantic Scholar metadata.

arxiv_id           float64        1.5k – 2.51k
title              stringlengths  9 – 178
authors            stringlengths  2 – 22.8k
categories         stringlengths  4 – 146
summary            stringlengths  103 – 1.92k
published          stringdate     2015-02-06 10:44:00 – 2025-07-10 17:59:58
comments           stringlengths  2 – 417
journal_ref        stringclasses  321 values
doi                stringclasses  398 values
ss_title           stringlengths  8 – 159
ss_authors         stringlengths  11 – 8.38k
ss_year            float64        2.02k – 2.03k
ss_venue           stringclasses  281 values
ss_citationCount   float64        0 – 134k
ss_referenceCount  float64        0 – 429
ss_fieldsOfStudy   stringclasses  47 values
----
arxiv_id: 2405.14906
title: AutoCoder: Enhancing Code Large Language Model with \textsc{AIEV-Instruct}
authors: ['Bin Lei', 'Yuchen Li', 'Qiuwu Chen']
categories: ['cs.SE', 'cs.AI']
summary: We introduce AutoCoder, the first Large Language Model to surpass GPT-4 Turbo (April 2024) and GPT-4o in pass@1 on the HumanEval benchmark test ($\mathbf{90.9\%}$ vs. $\mathbf{90.2\%}$). In addition, AutoCoder offers a more versatile code interpreter compared to GPT-4 Turbo and GPT-4o. Its code interpreter can instal...
published: 2024-05-23T02:53:25Z
comments, journal_ref, doi: null
ss_title: AutoCoder: Enhancing Code Large Language Model with AIEV-Instruct
ss_authors: ['Bin Lei', 'Yuchen Li', 'Qiuwu Chen']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 7
ss_referenceCount: 29
ss_fieldsOfStudy: ['Computer Science']
----
arxiv_id: 2405.14917
title: SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models
authors: ['Wei Huang', 'Haotong Qin', 'Yangdong Liu', 'Yawei Li', 'Qinshuo Liu', 'Xianglong Liu', 'Luca Benini', 'Michele Magno', 'Shiming Zhang', 'Xiaojuan Qi']
categories: ['cs.LG', 'cs.CL']
summary: Post-training quantization (PTQ) is an effective technique for compressing large language models (LLMs). However, while uniform-precision quantization is computationally efficient, it often compromises model performance. To address this, we propose SliM-LLM, a salience-driven mixed-precision quantization framework that...
published: 2024-05-23T16:21:48Z
comments: 22 pages
journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.14930
title: AstroPT: Scaling Large Observation Models for Astronomy
authors: ['Michael J. Smith', 'Ryan J. Roberts', 'Eirini Angeloudi', 'Marc Huertas-Company']
categories: ['astro-ph.IM', 'astro-ph.GA', 'cs.LG']
summary: This work presents AstroPT, an autoregressive pretrained transformer developed with astronomical use-cases in mind. The AstroPT models presented here have been pretrained on 8.6 million $512 \times 512$ pixel $grz$-band galaxy postage stamp observations from the DESI Legacy Survey DR8. We train a selection of foundatio...
published: 2024-05-23T18:00:00Z
comments: 12 pages, 4 figures, 1 table. Code available at https://github.com/Smith42/astroPT
journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.14974
title: LOVA3: Learning to Visual Question Answering, Asking and Assessment
authors: ['Henry Hengyuan Zhao', 'Pan Zhou', 'Difei Gao', 'Zechen Bai', 'Mike Zheng Shou']
categories: ['cs.CV', 'cs.AI', 'cs.CL']
summary: Question answering, asking, and assessment are three innate human traits crucial for understanding the world and acquiring knowledge. By enhancing these capabilities, humans can more effectively utilize data, leading to better comprehension and learning outcomes. Current Multimodal Large Language Models (MLLMs) primari...
published: 2024-05-23T18:21:59Z
comments: NeurIPS 2024. The code is available at https://github.com/showlab/LOVA3
journal_ref, doi: null
ss_title: LOVA3: Learning to Visual Question Answering, Asking and Assessment
ss_authors: ['Henry Hengyuan Zhao', 'Pan Zhou', 'Difei Gao', 'Mike Zheng Shou']
ss_year: 2024
ss_venue: Neural Information Processing Systems
ss_citationCount: 9
ss_referenceCount: 112
ss_fieldsOfStudy: ['Computer Science']
----
arxiv_id: 2405.14979
title: CraftsMan3D: High-fidelity Mesh Generation with 3D Native Generation and Interactive Geometry Refiner
authors: ['Weiyu Li', 'Jiarui Liu', 'Hongyu Yan', 'Rui Chen', 'Yixun Liang', 'Xuelin Chen', 'Ping Tan', 'Xiaoxiao Long']
categories: ['cs.GR', 'cs.CV']
summary: We present a novel generative 3D modeling system, coined CraftsMan, which can generate high-fidelity 3D geometries with highly varied shapes, regular mesh topologies, and detailed surfaces, and, notably, allows for refining the geometry in an interactive manner. Despite the significant advancements in 3D generation, ex...
published: 2024-05-23T18:30:12Z
comments: HomePage: https://craftsman3d.github.io/, Code: https://github.com/wyysf-98/CraftsMan3D
journal_ref, doi: null
ss_title: CraftsMan3D: High-fidelity Mesh Generation with 3D Native Generation and Interactive Geometry Refiner
ss_authors: ['Weiyu Li', 'Jiarui Liu', 'Rui Chen', 'Yixun Liang', 'Xuelin Chen', 'Ping Tan', 'Xiaoxiao Long']
ss_year: 2024
ss_venue: null
ss_citationCount: 59
ss_referenceCount: 65
ss_fieldsOfStudy: ['Computer Science']
----
arxiv_id: 2405.15032
title: Aya 23: Open Weight Releases to Further Multilingual Progress
authors: ['Viraat Aryabumi', 'John Dang', 'Dwarak Talupuru', 'Saurabh Dash', 'David Cairuz', 'Hangyu Lin', 'Bharat Venkitesh', 'Madeline Smith', 'Jon Ander Campos', 'Yi Chern Tan', 'Kelly Marchisio', 'Max Bartolo', 'Sebastian Ruder', 'Acyr Locatelli', 'Julia Kreutzer', 'Nick Frosst', 'Aidan Gomez', 'Phil Blunsom', 'Marzieh Fada...
categories: ['cs.CL']
summary: This technical report introduces Aya 23, a family of multilingual language models. Aya 23 builds on the recent release of the Aya model (Üstün et al., 2024), focusing on pairing a highly performant pre-trained model with the recently released Aya collection (Singh et al., 2024). The result is a powerful multilingua...
published: 2024-05-23T20:10:38Z
comments, journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.15165
title: A Solution-based LLM API-using Methodology for Academic Information Seeking
authors: ['Yuanchun Wang', 'Jifan Yu', 'Zijun Yao', 'Jing Zhang', 'Yuyang Xie', 'Shangqing Tu', 'Yiyang Fu', 'Youhe Feng', 'Jinkai Zhang', 'Jingyao Zhang', 'Bowen Huang', 'Yuanyao Li', 'Huihui Yuan', 'Lei Hou', 'Juanzi Li', 'Jie Tang']
categories: ['cs.CL', 'cs.AI', 'cs.SE']
summary: Applying large language models (LLMs) for academic API usage shows promise in reducing researchers' academic information seeking efforts. However, current LLM API-using methods struggle with complex API coupling commonly encountered in academic queries. To address this, we introduce SoAy, a solution-based LLM API-using...
published: 2024-05-24T02:44:14Z
comments: 22 pages, 13 figures
journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.15199
title: ODGEN: Domain-specific Object Detection Data Generation with Diffusion Models
authors: ['Jingyuan Zhu', 'Shiyu Li', 'Yuxuan Liu', 'Ping Huang', 'Jiulong Shan', 'Huimin Ma', 'Jian Yuan']
categories: ['cs.CV']
summary: Modern diffusion-based image generative models have made significant progress and become promising to enrich training data for the object detection task. However, the generation quality and the controllability for complex scenes containing multi-class objects and dense objects with occlusions remain limited. This paper...
published: 2024-05-24T04:10:34Z
comments: Accepted by NeurIPS 2024
journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.15223
title: iVideoGPT: Interactive VideoGPTs are Scalable World Models
authors: ['Jialong Wu', 'Shaofeng Yin', 'Ningya Feng', 'Xu He', 'Dong Li', 'Jianye Hao', 'Mingsheng Long']
categories: ['cs.CV', 'cs.LG', 'cs.RO']
summary: World models empower model-based agents to interactively explore, reason, and plan within imagined environments for real-world decision-making. However, the high demand for interactivity poses challenges in harnessing recent advancements in video generative models for developing world models at scale. This work introdu...
published: 2024-05-24T05:29:12Z
comments: NeurIPS 2024. Code is available at project website: https://thuml.github.io/iVideoGPT
journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.15234
title: Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models
authors: ['Yimeng Zhang', 'Xin Chen', 'Jinghan Jia', 'Yihua Zhang', 'Chongyu Fan', 'Jiancheng Liu', 'Mingyi Hong', 'Ke Ding', 'Sijia Liu']
categories: ['cs.CV', 'cs.CR']
summary: Diffusion models (DMs) have achieved remarkable success in text-to-image generation, but they also pose safety risks, such as the potential generation of harmful content and copyright violations. The techniques of machine unlearning, also known as concept erasing, have been developed to address these risks. However, th...
published: 2024-05-24T05:47:23Z
comments: Accepted by NeurIPS'24. Codes are available at https://github.com/OPTML-Group/AdvUnlearn
journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.15306
title: DeTikZify: Synthesizing Graphics Programs for Scientific Figures and Sketches with TikZ
authors: ['Jonas Belouadi', 'Simone Paolo Ponzetto', 'Steffen Eger']
categories: ['cs.CL', 'cs.CV']
summary: Creating high-quality scientific figures can be time-consuming and challenging, even though sketching ideas on paper is relatively easy. Furthermore, recreating existing figures that are not stored in formats preserving semantic information is equally complex. To tackle this problem, we introduce DeTikZify, a novel mul...
published: 2024-05-24T07:48:35Z
comments: Accepted at NeurIPS 2024 (spotlight); Project page: https://github.com/potamides/DeTikZify
journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.15319
title: Stacking Your Transformers: A Closer Look at Model Growth for Efficient LLM Pre-Training
authors: ['Wenyu Du', 'Tongxu Luo', 'Zihan Qiu', 'Zeyu Huang', 'Yikang Shen', 'Reynold Cheng', 'Yike Guo', 'Jie Fu']
categories: ['cs.CL', 'cs.AI']
summary: LLMs are computationally expensive to pre-train due to their large scale. Model growth emerges as a promising approach by leveraging smaller models to accelerate the training of larger ones. However, the viability of these model growth methods in efficient LLM pre-training remains underexplored. This work identifies th...
published: 2024-05-24T08:00:00Z
comments: NeurIPS 2024 Spotlight
journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.15506
title: Learning to Discretize Denoising Diffusion ODEs
authors: ['Vinh Tong', 'Hoang Trung-Dung', 'Anji Liu', 'Guy Van den Broeck', 'Mathias Niepert']
categories: ['cs.LG']
summary: Diffusion Probabilistic Models (DPMs) are generative models showing competitive performance in various domains, including image synthesis and 3D point cloud generation. Sampling from pre-trained DPMs involves multiple neural function evaluations (NFEs) to transform Gaussian noise samples into images, resulting in highe...
published: 2024-05-24T12:51:23Z
comments, journal_ref, doi: null
ss_title: Learning to Discretize Denoising Diffusion ODEs
ss_authors: ['Vinh Tong', 'Anji Liu', 'Trung-Dung Hoang', 'Guy Van den Broeck', 'Mathias Niepert']
ss_year: 2024
ss_venue: International Conference on Learning Representations
ss_citationCount: 6
ss_referenceCount: 50
ss_fieldsOfStudy: ['Computer Science']
----
arxiv_id: 2405.15574
title: Meteor: Mamba-based Traversal of Rationale for Large Language and Vision Models
authors: ['Byung-Kwan Lee', 'Chae Won Kim', 'Beomchan Park', 'Yong Man Ro']
categories: ['cs.CV']
summary: The rapid development of large language and vision models (LLVMs) has been driven by advances in visual instruction tuning. Recently, open-source LLVMs have curated high-quality visual instruction tuning datasets and utilized additional vision encoders or multiple computer vision models in order to narrow the performan...
published: 2024-05-24T14:04:03Z
comments: Code is available at https://github.com/ByungKwanLee/Meteor
journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.15589
title: Efficient Adversarial Training in LLMs with Continuous Attacks
authors: ['Sophie Xhonneux', 'Alessandro Sordoni', 'Stephan Günnemann', 'Gauthier Gidel', 'Leo Schwinn']
categories: ['cs.LG', 'cs.CR']
summary: Large language models (LLMs) are vulnerable to adversarial attacks that can bypass their safety guardrails. In many domains, adversarial training has proven to be one of the most promising methods to reliably improve robustness against such attacks. Yet, in the context of LLMs, current methods for adversarial training ...
published: 2024-05-24T14:20:09Z
comments: 19 pages, 4 figures
journal_ref, doi: null
ss_title: Efficient Adversarial Training in LLMs with Continuous Attacks
ss_authors: ['Sophie Xhonneux', 'Alessandro Sordoni', 'Stephan Günnemann', 'G. Gidel', 'Leo Schwinn']
ss_year: 2024
ss_venue: Neural Information Processing Systems
ss_citationCount: 56
ss_referenceCount: 46
ss_fieldsOfStudy: ['Computer Science']
----
arxiv_id: 2405.15640
title: GECKO: Generative Language Model for English, Code and Korean
authors: ['Sungwoo Oh', 'Donggyu Kim']
categories: ['cs.CL', 'cs.AI']
summary: We introduce GECKO, a bilingual large language model (LLM) optimized for Korean and English, along with programming languages. GECKO is pretrained on the balanced, high-quality corpus of Korean and English employing LLaMA architecture. In this report, we share the experiences of several efforts to build a better data p...
published: 2024-05-24T15:30:41Z
comments, journal_ref, doi: null
ss_title: GECKO: Generative Language Model for English, Code and Korean
ss_authors: ['Sungwoo Oh', 'Donggyu Kim']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 0
ss_referenceCount: 49
ss_fieldsOfStudy: ['Computer Science']
----
arxiv_id: 2405.15662
title: Class Machine Unlearning for Complex Data via Concepts Inference and Data Poisoning
authors: ['Wenhan Chang', 'Tianqing Zhu', 'Heng Xu', 'Wenjian Liu', 'Wanlei Zhou']
categories: ['cs.LG']
summary: In the current AI era, users may request AI companies to delete their data from the training dataset due to privacy concerns. As a model owner, retraining a model will consume significant computational resources. Therefore, machine unlearning is a newly emerged technology allowing model owners to delete requested training...
published: 2024-05-24T15:59:17Z
comments, journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.15734
title: LM4LV: A Frozen Large Language Model for Low-level Vision Tasks
authors: ['Boyang Zheng', 'Jinjin Gu', 'Shijun Li', 'Chao Dong']
categories: ['cs.CV']
summary: The success of large language models (LLMs) has fostered a new research trend of multi-modality large language models (MLLMs), which changes the paradigm of various fields in computer vision. Though MLLMs have shown promising results in numerous high-level vision and vision-language tasks such as VQA and text-to-image,...
published: 2024-05-24T17:25:00Z
comments, journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.15738
title: ConvLLaVA: Hierarchical Backbones as Visual Encoder for Large Multimodal Models
authors: ['Chunjiang Ge', 'Sijie Cheng', 'Ziming Wang', 'Jiale Yuan', 'Yuan Gao', 'Jun Song', 'Shiji Song', 'Gao Huang', 'Bo Zheng']
categories: ['cs.CV']
summary: High-resolution Large Multimodal Models (LMMs) encounter the challenges of excessive visual tokens and quadratic visual complexity. Current high-resolution LMMs address the quadratic complexity while still generating excessive visual tokens. However, the redundancy in visual tokens is the key problem as it leads to mor...
published: 2024-05-24T17:34:15Z
comments: 17 pages
journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.15863
title: Quality-aware Masked Diffusion Transformer for Enhanced Music Generation
authors: ['Chang Li', 'Ruoyu Wang', 'Lijuan Liu', 'Jun Du', 'Yixuan Sun', 'Zilu Guo', 'Zhenrong Zhang', 'Yuan Jiang', 'Jianqing Gao', 'Feng Ma']
categories: ['cs.SD', 'cs.AI', 'eess.AS']
summary: Text-to-music (TTM) generation, which converts textual descriptions into audio, opens up innovative avenues for multimedia creation. Achieving high quality and diversity in this process demands extensive, high-quality data, which are often scarce in available datasets. Most open-source datasets frequently suffer from i...
published: 2024-05-24T18:09:27Z
comments: IJCAI
journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.15953
title: Activator: GLU Activation Function as the Core Component of a Vision Transformer
authors: ['Abdullah Nazhat Abdullah', 'Tarkan Aydin']
categories: ['cs.CV']
summary: Transformer architecture currently represents the main driver behind many successes in a variety of tasks addressed by deep learning, especially the recent advances in natural language processing (NLP) culminating with large language models (LLM). In addition, transformer architecture has found a wide spread of interes...
published: 2024-05-24T21:46:52Z
comments: arXiv admin note: substantial text overlap with arXiv:2403.02411
journal_ref, doi: null
ss_title: Activator: GLU Activation Function as the Core Component of a Vision Transformer
ss_authors: ['Abdullah Nazhat Abdullah', 'Tarkan Aydin']
ss_year: 2024
ss_venue: null
ss_citationCount: 0
ss_referenceCount: 54
ss_fieldsOfStudy: ['Computer Science']
----
arxiv_id: 2405.16153
title: DefSent+: Improving sentence embeddings of language models by projecting definition sentences into a quasi-isotropic or isotropic vector space of unlimited dictionary entries
authors: ['Xiaodong Liu']
categories: ['cs.CL', 'cs.AI', 'cs.LG']
summary: This paper presents a significant improvement on the previous conference paper known as DefSent. The prior study seeks to improve sentence embeddings of language models by projecting definition sentences into the vector space of dictionary entries. We discover that this approach is not fully explored due to the methodo...
published: 2024-05-25T09:43:38Z
comments, journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.16406
title: SpinQuant: LLM quantization with learned rotations
authors: ['Zechun Liu', 'Changsheng Zhao', 'Igor Fedorov', 'Bilge Soran', 'Dhruv Choudhary', 'Raghuraman Krishnamoorthi', 'Vikas Chandra', 'Yuandong Tian', 'Tijmen Blankevoort']
categories: ['cs.LG', 'cs.AI', 'cs.CL', 'cs.CV']
summary: Post-training quantization (PTQ) techniques applied to weights, activations, and the KV cache greatly reduce memory usage, latency, and power consumption of Large Language Models (LLMs), but may lead to large quantization errors when outliers are present. Rotating activation or weight matrices helps remove outliers and...
published: 2024-05-26T02:15:49Z
comments: ICLR 2025
journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.16433
title: CPsyCoun: A Report-based Multi-turn Dialogue Reconstruction and Evaluation Framework for Chinese Psychological Counseling
authors: ['Chenhao Zhang', 'Renhao Li', 'Minghuan Tan', 'Min Yang', 'Jingwei Zhu', 'Di Yang', 'Jiahao Zhao', 'Guancheng Ye', 'Chengming Li', 'Xiping Hu']
categories: ['cs.CL', 'cs.AI', 'cs.CY']
summary: Using large language models (LLMs) to assist psychological counseling is a significant but challenging task at present. Attempts have been made on improving empathetic conversations or acting as effective assistants in the treatment with LLMs. However, the existing datasets lack consulting knowledge, resulting in LLMs ...
published: 2024-05-26T05:18:00Z
comments: Accepted to Findings of ACL 2024
journal_ref, doi: null
ss_title: CPsyCoun: A Report-based Multi-turn Dialogue Reconstruction and Evaluation Framework for Chinese Psychological Counseling
ss_authors: ['Chenhao Zhang', 'Renhao Li', 'Minghuan Tan', 'Min Yang', 'Jingwei Zhu', 'Di Yang', 'Jiahao Zhao', 'Guancheng Ye', 'Chengming Li', 'Xiping Hu', 'Derek F. Wong']
ss_year: 2024
ss_venue: Annual Meeting of the Association for Computational Linguistics
ss_citationCount: 29
ss_referenceCount: 32
ss_fieldsOfStudy: ['Computer Science']
----
arxiv_id: 2405.16436
title: Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer
authors: ['Zhihan Liu', 'Miao Lu', 'Shenao Zhang', 'Boyi Liu', 'Hongyi Guo', 'Yingxiang Yang', 'Jose Blanchet', 'Zhaoran Wang']
categories: ['cs.LG', 'cs.AI', 'stat.ML']
summary: Aligning generative models with human preference via RLHF typically suffers from overoptimization, where an imperfectly learned reward model can misguide the generative model to output undesired responses. We investigate this problem in a principled manner by identifying the source of the misalignment as a form of dist...
published: 2024-05-26T05:38:50Z
comments: Accepted by The Thirty-Eighth Annual Conference on Neural Information Processing Systems. 31 pages, 7 figures
journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.16579
title: Automatically Generating Numerous Context-Driven SFT Data for LLMs across Diverse Granularity
authors: ['Shanghaoran Quan']
categories: ['cs.CL']
summary: Constructing high-quality query-response pairs from custom corpus is crucial for supervised fine-tuning (SFT) large language models (LLMs) in many applications, like creating domain-specific AI assistants or roleplaying agents. However, sourcing this data through human annotation is costly, and existing automated metho...
published: 2024-05-26T14:14:18Z
comments, journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.16635
title: Compressing Lengthy Context With UltraGist
authors: ['Peitian Zhang', 'Zheng Liu', 'Shitao Xiao', 'Ninglu Shao', 'Qiwei Ye', 'Zhicheng Dou']
categories: ['cs.CL']
summary: Compressing lengthy context is a critical but technically challenging problem. In this paper, we propose a new method called UltraGist, which is distinguished for its high-quality compression of lengthy context due to the innovative design of the compression and learning algorithm. UltraGist brings forth the following ...
published: 2024-05-26T17:23:56Z
comments: Superseded by arXiv:2401.03462v3
journal_ref, doi: null
ss_title: Compressing Lengthy Context With UltraGist
ss_authors: ['Peitian Zhang', 'Zheng Liu', 'Shitao Xiao', 'Ninglu Shao', 'Qiwei Ye', 'Zhicheng Dou']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 4
ss_referenceCount: 32
ss_fieldsOfStudy: ['Computer Science']
----
arxiv_id: 2405.16646
title: A Provably Effective Method for Pruning Experts in Fine-tuned Sparse Mixture-of-Experts
authors: ['Mohammed Nowaz Rabbani Chowdhury', 'Meng Wang', 'Kaoutar El Maghraoui', 'Naigang Wang', 'Pin-Yu Chen', 'Christopher Carothers']
categories: ['cs.LG']
summary: The sparsely gated mixture of experts (MoE) architecture sends different inputs to different subnetworks, i.e., experts, through trainable routers. MoE reduces the training computation significantly for large models, but its deployment can be still memory or computation expensive for some downstream tasks. Model prunin...
published: 2024-05-26T17:52:58Z
comments: null
journal_ref: The 41st International Conference on Machine Learning, ICML 2024
doi and all ss_* fields: null
----
arxiv_id: 2405.16681
title: Triple Preference Optimization: Achieving Better Alignment using a Single Step Optimization
authors: ['Amir Saeidi', 'Shivanshu Verma', 'Aswin RRV', 'Kashif Rasul', 'Chitta Baral']
categories: ['cs.CL']
summary: Reinforcement Learning with Human Feedback (RLHF) enhances the alignment of Large Language Models (LLMs). However, its limitations have led to the development of Direct Preference Optimization (DPO), an RL-free approach designed to overcome these shortcomings. While studies have shown that DPO improves instruction-foll...
published: 2024-05-26T20:18:11Z
comments, journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.16700
title: Implicit Multimodal Alignment: On the Generalization of Frozen LLMs to Multimodal Inputs
authors: ['Mustafa Shukor', 'Matthieu Cord']
categories: ['cs.CV', 'cs.CL', 'cs.LG']
summary: Large Language Models (LLMs) have demonstrated impressive performance on multimodal tasks, without any multimodal finetuning. They are the building block for Large Multimodal Models, yet, we still lack a proper understanding of their success. In this work, we expose frozen LLMs to image, video, audio and text inputs an...
published: 2024-05-26T21:31:59Z
comments: NeurIPS 2024. Code: https://github.com/mshukor/ima-lmms. Project page: https://ima-lmms.github.io/
journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.16712
title: Zamba: A Compact 7B SSM Hybrid Model
authors: ['Paolo Glorioso', 'Quentin Anthony', 'Yury Tokpanov', 'James Whittington', 'Jonathan Pilault', 'Adam Ibrahim', 'Beren Millidge']
categories: ['cs.LG', 'cs.AI', 'cs.CL']
summary: In this technical report, we present Zamba, a novel 7B SSM-transformer hybrid model which achieves competitive performance against leading open-weight models at a comparable scale. Zamba is trained on 1T tokens from openly available datasets and is the best non-transformer model at this scale. Zamba pioneers a unique a...
published: 2024-05-26T22:23:02Z
comments, journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.16727
title: Disentangling and Integrating Relational and Sensory Information in Transformer Architectures
authors: ['Awni Altabaa', 'John Lafferty']
categories: ['cs.LG']
summary: Relational reasoning is a central component of generally intelligent systems, enabling robust and data-efficient inductive generalization. Recent empirical evidence shows that many existing neural architectures, including Transformers, struggle with tasks requiring relational reasoning. In this work, we distinguish bet...
published: 2024-05-26T23:52:51Z
comments: ICML 2025
journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.16785
title: PromptFix: You Prompt and We Fix the Photo
authors: ['Yongsheng Yu', 'Ziyun Zeng', 'Hang Hua', 'Jianlong Fu', 'Jiebo Luo']
categories: ['cs.CV']
summary: Diffusion models equipped with language models demonstrate excellent controllability in image generation tasks, allowing image processing to adhere to human instructions. However, the lack of diverse instruction-following data hampers the development of models that effectively recognize and execute user-customized inst...
published: 2024-05-27T03:13:28Z
comments: Accepted to NeurIPS 2024
journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.16886
title: Hawk: Learning to Understand Open-World Video Anomalies
authors: ['Jiaqi Tang', 'Hao Lu', 'Ruizheng Wu', 'Xiaogang Xu', 'Ke Ma', 'Cheng Fang', 'Bin Guo', 'Jiangbo Lu', 'Qifeng Chen', 'Ying-Cong Chen']
categories: ['cs.CV']
summary: Video Anomaly Detection (VAD) systems can autonomously monitor and identify disturbances, reducing the need for manual labor and associated costs. However, current VAD systems are often limited by their superficial semantic understanding of scenes and minimal user interaction. Additionally, the prevalent data scarcity ...
published: 2024-05-27T07:08:58Z
comments, journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.17057
title: ReflectionCoder: Learning from Reflection Sequence for Enhanced One-off Code Generation
authors: ['Houxing Ren', 'Mingjie Zhan', 'Zhongyuan Wu', 'Aojun Zhou', 'Junting Pan', 'Hongsheng Li']
categories: ['cs.CL', 'cs.AI']
summary: Code generation plays a crucial role in various tasks, such as code auto-completion and mathematical reasoning. Previous work has proposed numerous methods to enhance code generation performance, including integrating feedback from the compiler. Inspired by this, we present ReflectionCoder, a novel approach that effect...
published: 2024-05-27T11:27:00Z
comments: Accepted to ACL 2025 (main conference)
journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.17093
title: DeeperImpact: Optimizing Sparse Learned Index Structures
authors: ['Soyuj Basnet', 'Jerry Gou', 'Antonio Mallia', 'Torsten Suel']
categories: ['cs.IR']
summary: A lot of recent work has focused on sparse learned indexes that use deep neural architectures to significantly improve retrieval quality while keeping the efficiency benefits of the inverted index. While such sparse learned structures achieve effectiveness far beyond those of traditional inverted index-based rankers, t...
published: 2024-05-27T12:08:59Z
comments, journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.17103
title: Empowering Character-level Text Infilling by Eliminating Sub-Tokens
authors: ['Houxing Ren', 'Mingjie Zhan', 'Zhongyuan Wu', 'Hongsheng Li']
categories: ['cs.CL', 'cs.AI']
summary: In infilling tasks, sub-tokens, representing instances where a complete token is segmented into two parts, often emerge at the boundaries of prefixes, middles, and suffixes. Traditional methods focused on training models at the token level, leading to sub-optimal performance in character-level infilling tasks during th...
published: 2024-05-27T12:21:48Z
comments: Accepted to ACL 2024 (main conference)
journal_ref, doi: null
ss_title: Empowering Character-level Text Infilling by Eliminating Sub-Tokens
ss_authors: ['Houxing Ren', 'Mingjie Zhan', 'Zhongyuan Wu', 'Hongsheng Li']
ss_year: 2024
ss_venue: Annual Meeting of the Association for Computational Linguistics
ss_citationCount: 1
ss_referenceCount: 45
ss_fieldsOfStudy: ['Computer Science']
----
arxiv_id: 2405.17176
title: DreamMat: High-quality PBR Material Generation with Geometry- and Light-aware Diffusion Models
authors: ['Yuqing Zhang', 'Yuan Liu', 'Zhiyu Xie', 'Lei Yang', 'Zhongyuan Liu', 'Mengzhou Yang', 'Runze Zhang', 'Qilong Kou', 'Cheng Lin', 'Wenping Wang', 'Xiaogang Jin']
categories: ['cs.GR', 'cs.AI']
summary (opening truncated in the dump): ...2D diffusion model, which often contains unwanted baked-in shading effects and results in unrealistic rendering effects in the downstream applications. Generating Physically Based Rendering (PBR) materials instead of just RGB textures would be a promising solution. However, directly distilling the PBR material paramete...
published: 2024-05-27T13:55:08Z
comments: Accepted to SIGGRAPH 2024
journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.17220
title: RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness
authors: ['Tianyu Yu', 'Haoye Zhang', 'Qiming Li', 'Qixin Xu', 'Yuan Yao', 'Da Chen', 'Xiaoman Lu', 'Ganqu Cui', 'Yunkai Dang', 'Taiwen He', 'Xiaocheng Feng', 'Jun Song', 'Bo Zheng', 'Zhiyuan Liu', 'Tat-Seng Chua', 'Maosong Sun']
categories: ['cs.CL']
summary: Traditional feedback learning for hallucination reduction relies on labor-intensive manual labeling or expensive proprietary models. This leaves the community without foundational knowledge about how to build high-quality feedback with open-source MLLMs. In this work, we introduce RLAIF-V, a novel framework that aligns...
published: 2024-05-27T14:37:01Z
comments: Project Website: https://github.com/RLHF-V/RLAIF-V
journal_ref, doi: null
ss_title: RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness
ss_authors: ['Tianyu Yu', 'Haoye Zhang', 'Yuan Yao', 'Yunkai Dang', 'Dawn Chen', 'Xiaoman Lu', 'Ganqu Cui', 'Taiwen He', 'Zhiyuan Liu', 'Tat-Seng Chua', 'Maosong Sun']
ss_year: 2024
ss_venue: null
ss_citationCount: 35
ss_referenceCount: 75
ss_fieldsOfStudy: ['Computer Science']
----
arxiv_id: 2405.17251
title: GenWarp: Single Image to Novel Views with Semantic-Preserving Generative Warping
authors: ['Junyoung Seo', 'Kazumi Fukuda', 'Takashi Shibuya', 'Takuya Narihira', 'Naoki Murata', 'Shoukang Hu', 'Chieh-Hsin Lai', 'Seungryong Kim', 'Yuki Mitsufuji']
categories: ['cs.CV']
summary: Generating novel views from a single image remains a challenging task due to the complexity of 3D scenes and the limited diversity in the existing multi-view datasets to train a model on. Recent research combining large-scale text-to-image (T2I) models with monocular depth estimation (MDE) has shown promise in handling...
published: 2024-05-27T15:07:04Z
comments: Accepted to NeurIPS 2024 / Project page: https://GenWarp-NVS.github.io
journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.17382
title: ReMoDetect: Reward Models Recognize Aligned LLM's Generations
authors: ['Hyunseok Lee', 'Jihoon Tack', 'Jinwoo Shin']
categories: ['cs.LG', 'cs.CL']
summary: The remarkable capabilities and easy accessibility of large language models (LLMs) have significantly increased societal risks (e.g., fake news generation), necessitating the development of LLM-generated text (LGT) detection methods for safe usage. However, detecting LGTs is challenging due to the vast number of LLMs, ...
published: 2024-05-27T17:38:33Z
comments: Published as a conference proceeding for NeurIPS 2024
journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.17398
title: Vista: A Generalizable Driving World Model with High Fidelity and Versatile Controllability
authors: ['Shenyuan Gao', 'Jiazhi Yang', 'Li Chen', 'Kashyap Chitta', 'Yihang Qiu', 'Andreas Geiger', 'Jun Zhang', 'Hongyang Li']
categories: ['cs.CV', 'cs.AI']
summary: World models can foresee the outcomes of different actions, which is of paramount importance for autonomous driving. Nevertheless, existing driving world models still have limitations in generalization to unseen environments, prediction fidelity of critical details, and action controllability for flexible application. ...
published: 2024-05-27T17:49:15Z
comments: NeurIPS 2024. Code and model: https://github.com/OpenDriveLab/Vista, demo page: https://vista-demo.github.io
journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.17399
title: Transformers Can Do Arithmetic with the Right Embeddings
authors: ['Sean McLeish', 'Arpit Bansal', 'Alex Stein', 'Neel Jain', 'John Kirchenbauer', 'Brian R. Bartoldson', 'Bhavya Kailkhura', 'Abhinav Bhatele', 'Jonas Geiping', 'Avi Schwarzschild', 'Tom Goldstein']
categories: ['cs.LG', 'cs.AI']
summary: The poor performance of transformers on arithmetic tasks seems to stem in large part from their inability to keep track of the exact position of each digit inside of a large span of digits. We mend this problem by adding an embedding to each digit that encodes its position relative to the start of the number. In additi...
published: 2024-05-27T17:49:18Z
comments, journal_ref, doi: null
ss_title: Transformers Can Do Arithmetic with the Right Embeddings
ss_authors: ['Sean McLeish', 'Arpit Bansal', 'Alex Stein', 'Neel Jain', 'John Kirchenbauer', 'Brian R. Bartoldson', 'B. Kailkhura', 'A. Bhatele', 'Jonas Geiping', 'Avi Schwarzschild', 'Tom Goldstein']
ss_year: 2024
ss_venue: Neural Information Processing Systems
ss_citationCount: 37
ss_referenceCount: 48
ss_fieldsOfStudy: ['Computer Science']
----
arxiv_id: 2405.17428
title: NV-Embed: Improved Techniques for Training LLMs as Generalist Embedding Models
authors: ['Chankyu Lee', 'Rajarshi Roy', 'Mengyao Xu', 'Jonathan Raiman', 'Mohammad Shoeybi', 'Bryan Catanzaro', 'Wei Ping']
categories: ['cs.CL', 'cs.AI', 'cs.IR', 'cs.LG']
summary: Decoder-only LLM-based embedding models are beginning to outperform BERT or T5-based embedding models in general-purpose text embedding tasks, including dense vector-based retrieval. In this work, we introduce NV-Embed, incorporating architectural designs, training procedures, and curated datasets to significantly enha...
published: 2024-05-27T17:59:45Z
comments: ICLR 2025 (Spotlight). We open-source the model at: https://huggingface.co/nvidia/NV-Embed-v2
journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.17430
title: Matryoshka Multimodal Models
authors: ['Mu Cai', 'Jianwei Yang', 'Jianfeng Gao', 'Yong Jae Lee']
categories: ['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG']
summary: Large Multimodal Models (LMMs) such as LLaVA have shown strong performance in visual-linguistic reasoning. These models first embed images into a fixed large number of visual tokens and then feed them into a Large Language Model (LLM). However, this design causes an excessive number of tokens for dense visual scenarios...
published: 2024-05-27T17:59:56Z
comments: Project Page: https://matryoshka-mm.github.io/
journal_ref, doi, and all ss_* fields: null
----
arxiv_id: 2405.17455
title: WeatherFormer: A Pretrained Encoder Model for Learning Robust Weather Representations from Small Datasets
authors: ['Adib Hasan', 'Mardavij Roozbehani', 'Munther Dahleh']
categories: ['cs.CV', 'cs.AI', 'cs.LG', 'physics.ao-ph', 'stat.ML']
summary: This paper introduces WeatherFormer, a transformer encoder-based model designed to learn robust weather features from minimal observations. It addresses the challenge of modeling complex weather dynamics from small datasets, a bottleneck for many prediction tasks in agriculture, epidemiology, and climate science. Weath...
published: 2024-05-22T17:43:46Z
comments, journal_ref, doi, and all ss_* fields: null
2,405.17537
CLIBD: Bridging Vision and Genomics for Biodiversity Monitoring at Scale
['ZeMing Gong', 'Austin T. Wang', 'Xiaoliang Huo', 'Joakim Bruslund Haurum', 'Scott C. Lowe', 'Graham W. Taylor', 'Angel X. Chang']
['cs.AI', 'cs.CL', 'cs.CV']
Measuring biodiversity is crucial for understanding ecosystem health. While prior works have developed machine learning models for taxonomic classification of photographic images and DNA separately, in this work, we introduce a multimodal approach combining both, using CLIP-style contrastive learning to align images, b...
2024-05-27T17:57:48Z
31 pages with 14 figures
null
null
null
null
null
null
null
null
null
2,405.17743
ORLM: A Customizable Framework in Training Large Models for Automated Optimization Modeling
['Chenyu Huang', 'Zhengyang Tang', 'Shixi Hu', 'Ruoqing Jiang', 'Xin Zheng', 'Dongdong Ge', 'Benyou Wang', 'Zizhuo Wang']
['cs.CL', 'cs.AI', 'cs.CE', 'cs.LG']
Optimization modeling plays a critical role in the application of Operations Research (OR) tools to address real-world problems, yet they pose challenges and require extensive expertise from OR experts. With the advent of large language models (LLMs), new opportunities have emerged to streamline and automate such task....
2024-05-28T01:55:35Z
accepted by Operations Research
null
null
ORLM: A Customizable Framework in Training Large Models for Automated Optimization Modeling
['Chenyu Huang', 'Zhengyang Tang', 'Shixi Hu', 'Ruoqing Jiang', 'Xin Zheng', 'Dongdong Ge', 'Benyou Wang', 'Zizhuo Wang']
2,024
null
6
0
['Computer Science']
2,405.17767
Linguistic Collapse: Neural Collapse in (Large) Language Models
['Robert Wu', 'Vardan Papyan']
['cs.LG', 'cs.CL', 'stat.ML', '68T07 (Primary) 68T50 (Secondary)', 'I.2.6; I.2.7']
Neural collapse ($\mathcal{NC}$) is a phenomenon observed in classification tasks where top-layer representations collapse into their class means, which become equinorm, equiangular and aligned with the classifiers. These behaviours -- associated with generalization and robustness -- would manifest under specific condi...
2024-05-28T02:46:11Z
NeurIPS 2024; 35 pages; 30 figures; reverted to log mean norms for NC2
null
null
Linguistic Collapse: Neural Collapse in (Large) Language Models
['Robert Wu', 'Vardan Papyan']
2,024
Neural Information Processing Systems
16
123
['Computer Science', 'Mathematics']
2,405.17829
LDMol: A Text-to-Molecule Diffusion Model with Structurally Informative Latent Space Surpasses AR Models
['Jinho Chang', 'Jong Chul Ye']
['cs.LG', 'cs.AI']
With the emergence of diffusion models as a frontline generative model, many researchers have proposed molecule generation techniques with conditional diffusion models. However, the unavoidable discreteness of a molecule makes it difficult for a diffusion model to connect raw data with highly complex conditions like na...
2024-05-28T04:59:13Z
Poster in ICML 2025; 19 pages, 13 figures
null
null
null
null
null
null
null
null
null
2,405.17842
MMDisCo: Multi-Modal Discriminator-Guided Cooperative Diffusion for Joint Audio and Video Generation
['Akio Hayakawa', 'Masato Ishii', 'Takashi Shibuya', 'Yuki Mitsufuji']
['cs.CV', 'cs.LG', 'cs.MM', 'cs.SD', 'eess.AS']
This study aims to construct an audio-video generative model with minimal computational cost by leveraging pre-trained single-modal generative models for audio and video. To achieve this, we propose a novel method that guides single-modal models to cooperatively generate well-aligned samples across modalities. Specific...
2024-05-28T05:43:03Z
ICLR 2025
null
null
null
null
null
null
null
null
null
2,405.17846
Safety Control of Service Robots with LLMs and Embodied Knowledge Graphs
['Yong Qi', 'Gabriel Kyebambo', 'Siyuan Xie', 'Wei Shen', 'Shenghui Wang', 'Bitao Xie', 'Bin He', 'Zhipeng Wang', 'Shuo Jiang']
['cs.RO', 'cs.AI']
Safety limitations in service robotics across various industries have raised significant concerns about the need for robust mechanisms ensuring that robots adhere to safe practices, thereby preventing actions that might harm humans or cause property damage. Despite advances, including the integration of Knowledge Graph...
2024-05-28T05:50:25Z
null
null
null
Safety Control of Service Robots with LLMs and Embodied Knowledge Graphs
['Yong Qi', 'Gabriel Kyebambo', 'Siyuan Xie', 'Wei Shen', 'Shenghui Wang', 'Bitao Xie', 'Bin He', 'Zhipeng Wang', 'Shuo Jiang']
2,024
arXiv.org
2
55
['Computer Science']
2,405.17873
MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization
['Tianchen Zhao', 'Xuefei Ning', 'Tongcheng Fang', 'Enshu Liu', 'Guyue Huang', 'Zinan Lin', 'Shengen Yan', 'Guohao Dai', 'Yu Wang']
['cs.CV', 'cs.AI']
Diffusion models have achieved significant visual generation quality. However, their significant computational and memory costs pose challenge for their application on resource-constrained mobile devices or even desktop GPUs. Recent few-step diffusion models reduces the inference time by reducing the denoising steps. H...
2024-05-28T06:50:58Z
Project Page: https://a-suozhang.xyz/mixdq.github.io/
null
null
null
null
null
null
null
null
null
2,405.17933
ToonCrafter: Generative Cartoon Interpolation
['Jinbo Xing', 'Hanyuan Liu', 'Menghan Xia', 'Yong Zhang', 'Xintao Wang', 'Ying Shan', 'Tien-Tsin Wong']
['cs.CV']
We introduce ToonCrafter, a novel approach that transcends traditional correspondence-based cartoon video interpolation, paving the way for generative interpolation. Traditional methods, that implicitly assume linear motion and the absence of complicated phenomena like dis-occlusion, often struggle with the exaggerated...
2024-05-28T07:58:33Z
Project page: https://doubiiu.github.io/projects/ToonCrafter/
null
null
null
null
null
null
null
null
null
2,405.17976
Yuan 2.0-M32: Mixture of Experts with Attention Router
['Shaohua Wu', 'Jiangang Luo', 'Xi Chen', 'Lingjun Li', 'Xudong Zhao', 'Tong Yu', 'Chao Wang', 'Yue Wang', 'Fei Wang', 'Weixu Qiao', 'Houbo He', 'Zeru Zhang', 'Zeyu Sun', 'Junxiong Mao', 'Chong Shen']
['cs.AI', 'cs.CL']
Yuan 2.0-M32, with a similar base architecture as Yuan-2.0 2B, uses a mixture-of-experts architecture with 32 experts of which 2 experts are active. A new router network, Attention Router, is proposed and adopted for a more efficient selection of experts, which improves the accuracy compared to the model with classical...
2024-05-28T09:05:08Z
14 pages,3 figures, 7 tables
null
null
Yuan 2.0-M32: Mixture of Experts with Attention Router
['Shaohua Wu', 'Jiangang Luo', 'Xi Chen', 'Lingjun Li', 'Xudong Zhao', 'Tong Yu', 'Chao Wang', 'Yue Wang', 'Fei Wang', 'Weixu Qiao', 'Houbo He', 'Zeru Zhang', 'Zeyu Sun', 'Junxiong Mao', 'Chong Shen']
2,024
arXiv.org
11
2
['Computer Science']
2,405.17977
Aligning to Thousands of Preferences via System Message Generalization
['Seongyun Lee', 'Sue Hyun Park', 'Seungone Kim', 'Minjoon Seo']
['cs.CL']
Although humans inherently have diverse values, current large language model (LLM) alignment methods often assume that aligning LLMs with the general public's preferences is optimal. A major challenge in adopting a more individualized approach to LLM alignment is its lack of scalability, as it involves repeatedly acqui...
2024-05-28T09:06:18Z
Accepted to NeurIPS 2024
null
null
Aligning to Thousands of Preferences via System Message Generalization
['Seongyun Lee', 'Sue Hyun Park', 'Seungone Kim', 'Minjoon Seo']
2,024
Neural Information Processing Systems
49
90
['Computer Science']
2,405.18357
Faithful Logical Reasoning via Symbolic Chain-of-Thought
['Jundong Xu', 'Hao Fei', 'Liangming Pan', 'Qian Liu', 'Mong-Li Lee', 'Wynne Hsu']
['cs.CL']
While the recent Chain-of-Thought (CoT) technique enhances the reasoning ability of large language models (LLMs) with the theory of mind, it might still struggle in handling logical reasoning that relies much on symbolic expressions and rigid deducing rules. To strengthen the logical reasoning capability of LLMs, we pr...
2024-05-28T16:55:33Z
Accepted by ACL 2024 (main proceeding)
null
null
null
null
null
null
null
null
null
2,405.18369
PromptWizard: Task-Aware Prompt Optimization Framework
['Eshaan Agarwal', 'Joykirat Singh', 'Vivek Dani', 'Raghav Magazine', 'Tanuja Ganu', 'Akshay Nambi']
['cs.CL', 'cs.AI', 'cs.LG']
Large language models (LLMs) have transformed AI across diverse domains, with prompting being central to their success in guiding model outputs. However, manual prompt engineering is both labor-intensive and domain-specific, necessitating the need for automated solutions. We introduce PromptWizard, a novel, fully autom...
2024-05-28T17:08:31Z
null
null
null
null
null
null
null
null
null
null
2,405.18392
Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations
['Alexander Hägele', 'Elie Bakouch', 'Atli Kosson', 'Loubna Ben Allal', 'Leandro Von Werra', 'Martin Jaggi']
['cs.LG']
Scale has become a main ingredient in obtaining strong machine learning models. As a result, understanding a model's scaling properties is key to effectively designing both the right training setup as well as future generations of architectures. In this work, we argue that scale and training research has been needlessl...
2024-05-28T17:33:54Z
Spotlight at NeurIPS 2024
null
null
null
null
null
null
null
null
null
2,405.18407
Phased Consistency Models
['Fu-Yun Wang', 'Zhaoyang Huang', 'Alexander William Bergman', 'Dazhong Shen', 'Peng Gao', 'Michael Lingelbach', 'Keqiang Sun', 'Weikang Bian', 'Guanglu Song', 'Yu Liu', 'Xiaogang Wang', 'Hongsheng Li']
['cs.LG', 'cs.CV']
Consistency Models (CMs) have made significant progress in accelerating the generation of diffusion models. However, their application to high-resolution, text-conditioned image generation in the latent space remains unsatisfactory. In this paper, we identify three key flaws in the current design of Latent Consistency ...
2024-05-28T17:47:19Z
NeurIPS 2024
null
null
Phased Consistency Models
['Fu-Yun Wang', 'Zhaoyang Huang', 'Alexander William Bergman', 'Dazhong Shen', 'Peng Gao', 'Michael Lingelbach', 'Keqiang Sun', 'Weikang Bian', 'Guanglu Song', 'Yu Liu', 'Hongsheng Li', 'Xiaogang Wang']
2,024
Neural Information Processing Systems
12
0
['Computer Science']
2,405.18416
3D StreetUnveiler with Semantic-aware 2DGS -- a simple baseline
['Jingwei Xu', 'Yikai Wang', 'Yiqun Zhao', 'Yanwei Fu', 'Shenghua Gao']
['cs.CV']
Unveiling an empty street from crowded observations captured by in-car cameras is crucial for autonomous driving. However, removing all temporarily static objects, such as stopped vehicles and standing pedestrians, presents a significant challenge. Unlike object-centric 3D inpainting, which relies on thorough observati...
2024-05-28T17:57:12Z
Project page: https://streetunveiler.github.io
null
null
null
null
null
null
null
null
null
2,405.18425
ViG: Linear-complexity Visual Sequence Learning with Gated Linear Attention
['Bencheng Liao', 'Xinggang Wang', 'Lianghui Zhu', 'Qian Zhang', 'Chang Huang']
['cs.CV', 'cs.AI']
Recently, linear complexity sequence modeling networks have achieved modeling capabilities similar to Vision Transformers on a variety of computer vision tasks, while using fewer FLOPs and less memory. However, their advantage in terms of actual runtime speed is not significant. To address this issue, we introduce Gate...
2024-05-28T17:59:21Z
Work in progress. Code is available at \url{https://github.com/hustvl/ViG}
null
null
ViG: Linear-complexity Visual Sequence Learning with Gated Linear Attention
['Bencheng Liao', 'Xinggang Wang', 'Lianghui Zhu', 'Qian Zhang', 'Chang Huang']
2,024
AAAI Conference on Artificial Intelligence
4
116
['Computer Science']
2,405.18503
SoundCTM: Unifying Score-based and Consistency Models for Full-band Text-to-Sound Generation
['Koichi Saito', 'Dongjun Kim', 'Takashi Shibuya', 'Chieh-Hsin Lai', 'Zhi Zhong', 'Yuhta Takida', 'Yuki Mitsufuji']
['cs.SD', 'cs.LG', 'eess.AS']
Sound content creation, essential for multimedia works such as video games and films, often involves extensive trial-and-error, enabling creators to semantically reflect their artistic ideas and inspirations, which evolve throughout the creation process, into the sound. Recent high-quality diffusion-based Text-to-Sound...
2024-05-28T18:14:52Z
Audio samples: https://anonymus-soundctm.github.io/soundctm_iclr/. Codes: https://github.com/sony/soundctm. Checkpoints: https://huggingface.co/Sony/soundctm
null
null
SoundCTM: Unifying Score-based and Consistency Models for Full-band Text-to-Sound Generation
['Koichi Saito', 'Dongjun Kim', 'Takashi Shibuya', 'Chieh-Hsin Lai', 'Zhi-Wei Zhong', 'Yuhta Takida', 'Yuki Mitsufuji']
2,024
International Conference on Learning Representations
4
65
['Computer Science', 'Engineering']
2,405.18585
Transfer Learning for Emulating Ocean Climate Variability across $CO_2$ forcing
['Surya Dheeshjith', 'Adam Subel', 'Shubham Gupta', 'Alistair Adcroft', 'Carlos Fernandez-Granda', 'Julius Busecke', 'Laure Zanna']
['physics.ao-ph']
With the success of machine learning (ML) applied to climate reaching further every day, emulators have begun to show promise not only for weather but for multi-year time scales in the atmosphere. Similar work for the ocean remains nascent, with state-of-the-art limited to models running for shorter time scales or only...
2024-05-28T21:05:21Z
null
null
null
Transfer Learning for Emulating Ocean Climate Variability across $CO_2$ forcing
['Surya Dheeshjith', 'Adam Subel', 'Shubham Gupta', 'Alistair Adcroft', 'C. Fernandez‐Granda', 'Julius Busecke', 'Laure Zanna']
2,024
null
3
25
['Physics']
2,405.18654
Mitigating Object Hallucination in MLLMs via Data-augmented Phrase-level Alignment
['Pritam Sarkar', 'Sayna Ebrahimi', 'Ali Etemad', 'Ahmad Beirami', 'Sercan Ö. Arık', 'Tomas Pfister']
['cs.CV']
Despite their significant advancements, Multimodal Large Language Models (MLLMs) often generate factually inaccurate information, referred to as hallucination. In this work, we address object hallucinations in MLLMs, where information is generated about an object not present in the input image. We introduce Data-augmen...
2024-05-28T23:36:00Z
Published in ICLR 2025
null
null
null
null
null
null
null
null
null
2,405.18749
A SARS-CoV-2 Interaction Dataset and VHH Sequence Corpus for Antibody Language Models
['Hirofumi Tsuruta', 'Hiroyuki Yamazaki', 'Ryota Maeda', 'Ryotaro Tamura', 'Akihiro Imura']
['cs.LG', 'q-bio.GN']
Antibodies are crucial proteins produced by the immune system to eliminate harmful foreign substances and have become pivotal therapeutic agents for treating human diseases. To accelerate the discovery of antibody therapeutics, there is growing interest in constructing language models using antibody sequences. However,...
2024-05-29T04:22:18Z
null
null
null
null
null
null
null
null
null
null
2,405.18952
Are You Sure? Rank Them Again: Repeated Ranking For Better Preference Datasets
['Peter Devine']
['cs.CL', 'cs.AI', 'cs.LG']
Training Large Language Models (LLMs) with Reinforcement Learning from AI Feedback (RLAIF) aligns model outputs more closely with human preferences. This involves an evaluator model ranking multiple candidate responses to user prompts. However, the rankings from popular evaluator models such as GPT-4 can be inconsisten...
2024-05-29T10:08:31Z
null
null
null
null
null
null
null
null
null
null
2,405.18991
EasyAnimate: A High-Performance Long Video Generation Method based on Transformer Architecture
['Jiaqi Xu', 'Xinyi Zou', 'Kunzhe Huang', 'Yunkuo Chen', 'Bo Liu', 'MengLi Cheng', 'Xing Shi', 'Jun Huang']
['cs.CV', 'cs.CL', 'cs.MM']
This paper presents EasyAnimate, an advanced method for video generation that leverages the power of transformer architecture for high-performance outcomes. We have expanded the DiT framework originally designed for 2D image synthesis to accommodate the complexities of 3D video generation by incorporating a motion modu...
2024-05-29T11:11:07Z
8 pages, 6 figures
null
null
null
null
null
null
null
null
null
2,405.19076
Cephalo: Multi-Modal Vision-Language Models for Bio-Inspired Materials Analysis and Design
['Markus J. Buehler']
['cs.CV', 'cond-mat.mes-hall', 'cond-mat.mtrl-sci', 'cs.CL', 'cs.LG']
We present Cephalo, a series of multimodal vision large language models (V-LLMs) designed for materials science applications, integrating visual and linguistic data for enhanced understanding. A key innovation of Cephalo is its advanced dataset generation method. Cephalo is trained on integrated image and text data fro...
2024-05-29T13:34:32Z
null
null
null
null
null
null
null
null
null
null
2,405.19101
Poseidon: Efficient Foundation Models for PDEs
['Maximilian Herde', 'Bogdan Raonić', 'Tobias Rohner', 'Roger Käppeli', 'Roberto Molinaro', 'Emmanuel de Bézenac', 'Siddhartha Mishra']
['cs.LG']
We introduce Poseidon, a foundation model for learning the solution operators of PDEs. It is based on a multiscale operator transformer, with time-conditioned layer norms that enable continuous-in-time evaluations. A novel training strategy leveraging the semi-group property of time-dependent PDEs to allow for signific...
2024-05-29T14:06:51Z
null
null
null
null
null
null
null
null
null
null
2,405.19265
AlchemistCoder: Harmonizing and Eliciting Code Capability by Hindsight Tuning on Multi-source Data
['Zifan Song', 'Yudong Wang', 'Wenwei Zhang', 'Kuikun Liu', 'Chengqi Lyu', 'Demin Song', 'Qipeng Guo', 'Hang Yan', 'Dahua Lin', 'Kai Chen', 'Cairong Zhao']
['cs.CL']
Open-source Large Language Models (LLMs) and their specialized variants, particularly Code LLMs, have recently delivered impressive performance. However, previous Code LLMs are typically fine-tuned on single-source data with limited quality and diversity, which may insufficiently elicit the potential of pre-trained Cod...
2024-05-29T16:57:33Z
Preprint with 20 pages and 20 figures. Source code and models at https://github.com/InternLM/AlchemistCoder
null
null
null
null
null
null
null
null
null
2,405.19298
Adaptive Image Quality Assessment via Teaching Large Multimodal Model to Compare
['Hanwei Zhu', 'Haoning Wu', 'Yixuan Li', 'Zicheng Zhang', 'Baoliang Chen', 'Lingyu Zhu', 'Yuming Fang', 'Guangtao Zhai', 'Weisi Lin', 'Shiqi Wang']
['cs.CV', 'eess.IV']
While recent advancements in large multimodal models (LMMs) have significantly improved their abilities in image quality assessment (IQA) relying on absolute quality rating, how to transfer reliable relative quality comparison outputs to continuous perceptual quality scores remains largely unexplored. To address this g...
2024-05-29T17:26:09Z
null
null
null
Adaptive Image Quality Assessment via Teaching Large Multimodal Model to Compare
['Hanwei Zhu', 'Haoning Wu', 'Yixuan Li', 'Zicheng Zhang', 'Baoliang Chen', 'Lingyu Zhu', 'Yuming Fang', 'Guangtao Zhai', 'Weisi Lin', 'Shiqi Wang']
2,024
Neural Information Processing Systems
23
76
['Computer Science', 'Engineering']
2,405.19315
Matryoshka Query Transformer for Large Vision-Language Models
['Wenbo Hu', 'Zi-Yi Dou', 'Liunian Harold Li', 'Amita Kamath', 'Nanyun Peng', 'Kai-Wei Chang']
['cs.CV', 'cs.CL', 'cs.LG']
Large Vision-Language Models (LVLMs) typically encode an image into a fixed number of visual tokens (e.g., 576) and process these tokens with a language model. Despite their strong performance, LVLMs face challenges in adapting to varying computational constraints. This raises the question: can we achieve flexibility i...
2024-05-29T17:39:42Z
Preprint. Our code and model are publicly available at https://github.com/gordonhu608/MQT-LLaVA
null
null
Matryoshka Query Transformer for Large Vision-Language Models
['Wenbo Hu', 'Zi-Yi Dou', 'Liunian Harold Li', 'Amita Kamath', 'Nanyun Peng', 'Kai-Wei Chang']
2,024
Neural Information Processing Systems
10
44
['Computer Science']
2,405.19332
Self-Exploring Language Models: Active Preference Elicitation for Online Alignment
['Shenao Zhang', 'Donghan Yu', 'Hiteshi Sharma', 'Han Zhong', 'Zhihan Liu', 'Ziyi Yang', 'Shuohang Wang', 'Hany Hassan', 'Zhaoran Wang']
['cs.LG', 'cs.AI']
Preference optimization, particularly through Reinforcement Learning from Human Feedback (RLHF), has achieved significant success in aligning Large Language Models (LLMs) to adhere to human intentions. Unlike offline alignment with a fixed dataset, online feedback collection from humans or AI on model generations typic...
2024-05-29T17:59:07Z
null
null
null
null
null
null
null
null
null
null
2,405.19360
ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users
['Guanlin Li', 'Kangjie Chen', 'Shudong Zhang', 'Jie Zhang', 'Tianwei Zhang']
['cs.CR', 'cs.AI']
Large-scale pre-trained generative models are taking the world by storm, due to their abilities in generating creative content. Meanwhile, safeguards for these generative models are developed, to protect users' rights and safety, most of which are designed for large language models. Existing methods primarily focus on ...
2024-05-24T07:44:27Z
Accepted by NeurIPS 2024
null
null
null
null
null
null
null
null
null
2,405.19495
Qiskit Code Assistant: Training LLMs for generating Quantum Computing Code
['Nicolas Dupuis', 'Luca Buratti', 'Sanjay Vishwakarma', 'Aitana Viudes Forrat', 'David Kremer', 'Ismael Faro', 'Ruchir Puri', 'Juan Cruz-Benito']
['quant-ph', 'cs.AI']
Code Large Language Models (Code LLMs) have emerged as powerful tools, revolutionizing the software development landscape by automating the coding process and reducing time and effort required to build applications. This paper focuses on training Code LLMs to specialize in the field of quantum computing. We begin by di...
2024-05-29T20:21:00Z
null
null
null
Qiskit Code Assistant: Training LLMs for generating Quantum Computing Code
['Nicolas Dupuis', 'Luca Buratti', 'Sanjay Vishwakarma', 'Aitana Viudes Forrat', 'David Kremer', 'Ismael Faro', 'Ruchir Puri', 'Juan Cruz-Benito']
2,024
2024 IEEE LLM Aided Design Workshop (LAD)
11
39
['Computer Science', 'Physics']
2,405.19538
CheXpert Plus: Augmenting a Large Chest X-ray Dataset with Text Radiology Reports, Patient Demographics and Additional Image Formats
['Pierre Chambon', 'Jean-Benoit Delbrouck', 'Thomas Sounack', 'Shih-Cheng Huang', 'Zhihong Chen', 'Maya Varma', 'Steven QH Truong', 'Chu The Chuong', 'Curtis P. Langlotz']
['cs.CL', 'cs.AI', 'cs.CV', 'cs.LG']
Since the release of the original CheXpert paper five years ago, CheXpert has become one of the most widely used and cited clinical AI datasets. The emergence of vision language models has sparked an increase in demands for sharing reports linked to CheXpert images, along with a growing interest among AI fairness resea...
2024-05-29T21:48:56Z
13 pages Updated title
null
null
CheXpert Plus: Augmenting a Large Chest X-ray Dataset with Text Radiology Reports, Patient Demographics and Additional Image Formats
['Pierre J. Chambon', 'Jean-Benoit Delbrouck', 'Thomas Sounack', 'Shih-Cheng Huang', 'Zhihong Chen', 'Maya Varma', 'Steven Q. H. Truong', 'Chu The Chuong', 'Curtis P. Langlotz']
2,024
arXiv.org
16
30
['Computer Science']
2,405.19670
One Token Can Help! Learning Scalable and Pluggable Virtual Tokens for Retrieval-Augmented Large Language Models
['Yutao Zhu', 'Zhaoheng Huang', 'Zhicheng Dou', 'Ji-Rong Wen']
['cs.CL']
Retrieval-augmented generation (RAG) is a promising way to improve large language models (LLMs) for generating more factual, accurate, and up-to-date content. Existing methods either optimize prompts to guide LLMs in leveraging retrieved information or directly fine-tune LLMs to adapt to RAG scenarios. Although fine-tu...
2024-05-30T03:44:54Z
Accepted by AAAI 2025, repo: https://github.com/DaoD/SPRING/
null
null
One Token Can Help! Learning Scalable and Pluggable Virtual Tokens for Retrieval-Augmented Large Language Models
['Yutao Zhu', 'Zhaoheng Huang', 'Zhicheng Dou', 'Ji-Rong Wen']
2,024
AAAI Conference on Artificial Intelligence
6
65
['Computer Science']
2,405.19715
SpecDec++: Boosting Speculative Decoding via Adaptive Candidate Lengths
['Kaixuan Huang', 'Xudong Guo', 'Mengdi Wang']
['cs.CL', 'cs.AI', 'cs.LG']
Speculative decoding reduces the inference latency of a target large language model via utilizing a smaller and faster draft model. Its performance depends on a hyperparameter K -- the candidate length, i.e., the number of candidate tokens for the target model to verify in each round. However, previous methods often us...
2024-05-30T05:49:38Z
Accepted to COLM 2025
null
null
null
null
null
null
null
null
null
2,405.19783
Instruction-Guided Visual Masking
['Jinliang Zheng', 'Jianxiong Li', 'Sijie Cheng', 'Yinan Zheng', 'Jiaming Li', 'Jihao Liu', 'Yu Liu', 'Jingjing Liu', 'Xianyuan Zhan']
['cs.CV', 'cs.AI', 'cs.LG', 'cs.RO']
Instruction following is crucial in contemporary LLM. However, when extended to multimodal setting, it often suffers from misalignment between specific textual instruction and targeted local region of an image. To achieve more accurate and nuanced multimodal instruction following, we introduce Instruction-guided Visual...
2024-05-30T07:48:32Z
NeurIPS 2024
null
null
null
null
null
null
null
null
null
2,405.19941
Synthetic Patients: Simulating Difficult Conversations with Multimodal Generative AI for Medical Education
['Simon N. Chu', 'Alex J. Goodell']
['cs.HC', 'cs.CY']
Problem: Effective patient-centered communication is a core competency for physicians. However, both seasoned providers and medical trainees report decreased confidence in leading conversations on sensitive topics such as goals of care or end-of-life discussions. The significant administrative burden and the resources ...
2024-05-30T11:02:08Z
null
null
null
null
null
null
null
null
null
null
2,405.20053
Would I Lie To You? Inference Time Alignment of Language Models using Direct Preference Heads
['Avelina Asada Hadji-Kyriacou', 'Ognjen Arandjelovic']
['cs.CL', 'cs.AI', 'cs.LG']
Pre-trained Language Models (LMs) exhibit strong zero-shot and in-context learning capabilities; however, their behaviors are often difficult to control. By utilizing Reinforcement Learning from Human Feedback (RLHF), it is possible to fine-tune unsupervised LMs to follow instructions and produce outputs that reflect h...
2024-05-30T13:38:52Z
null
null
null
null
null
null
null
null
null
null
2,405.20079
Student Answer Forecasting: Transformer-Driven Answer Choice Prediction for Language Learning
['Elena Grazia Gado', 'Tommaso Martorella', 'Luca Zunino', 'Paola Mejia-Domenzain', 'Vinitra Swamy', 'Jibril Frej', 'Tanja Käser']
['cs.CL', 'cs.CY', 'cs.LG']
Intelligent Tutoring Systems (ITS) enhance personalized learning by predicting student answers to provide immediate and customized instruction. However, recent research has primarily focused on the correctness of the answer rather than the student's performance on specific answer choices, limiting insights into student...
2024-05-30T14:09:43Z
Accepted as a poster paper at EDM 2024: 17th International Conference on Educational Data Mining in Atlanta, USA
null
null
null
null
null
null
null
null
null
2,405.20145
Heidelberg-Boston @ SIGTYP 2024 Shared Task: Enhancing Low-Resource Language Analysis With Character-Aware Hierarchical Transformers
['Frederick Riemenschneider', 'Kevin Krahn']
['cs.CL', 'I.2.7']
Historical languages present unique challenges to the NLP community, with one prominent hurdle being the limited resources available in their closed corpora. This work describes our submission to the constrained subtask of the SIGTYP 2024 shared task, focusing on PoS tagging, morphological tagging, and lemmatization fo...
2024-05-30T15:23:34Z
Accepted for publication at the 6th Workshop on Research in Computational Linguistic Typology and Multilingual NLP (SIGTYP-WS) 2024; 11 pages, 1 figure, 9 tables
null
null
null
null
null
null
null
null
null
2,405.20204
Jina CLIP: Your CLIP Model Is Also Your Text Retriever
['Andreas Koukounas', 'Georgios Mastrapas', 'Michael Günther', 'Bo Wang', 'Scott Martens', 'Isabelle Mohr', 'Saba Sturua', 'Mohammad Kalim Akram', 'Joan Fontanals Martínez', 'Saahil Ognawala', 'Susana Guzman', 'Maximilian Werk', 'Nan Wang', 'Han Xiao']
['cs.CL', 'cs.AI', 'cs.CV', 'cs.IR', '68T50', 'I.2.7']
Contrastive Language-Image Pretraining (CLIP) is widely used to train models to align images and texts in a common embedding space by mapping them to fixed-sized vectors. These models are key to multimodal information retrieval and related tasks. However, CLIP models generally underperform in text-only tasks compared t...
2024-05-30T16:07:54Z
4 pages, MFM-EAI@ICML2024
null
null
Jina CLIP: Your CLIP Model Is Also Your Text Retriever
['Andreas Koukounas', 'Georgios Mastrapas', 'Michael Günther', 'Bo Wang', 'Scott Martens', 'Isabelle Mohr', 'Saba Sturua', 'Mohammad Kalim Akram', "Joan Fontanals Mart'inez", 'Saahil Ognawala', 'Susana Guzman', 'Maximilian Werk', 'Nan Wang', 'Han Xiao']
2,024
arXiv.org
18
36
['Computer Science']
2,405.20215
TS-Align: A Teacher-Student Collaborative Framework for Scalable Iterative Finetuning of Large Language Models
['Chen Zhang', 'Chengguang Tang', 'Dading Chong', 'Ke Shi', 'Guohua Tang', 'Feng Jiang', 'Haizhou Li']
['cs.CL']
Mainstream approaches to aligning large language models (LLMs) heavily rely on human preference data, particularly when models require periodic updates. The standard process for iterative alignment of LLMs involves collecting new human feedback for each update. However, the data collection process is costly and challen...
2024-05-30T16:17:40Z
EMNLP-2024 Findings
null
null
TS-Align: A Teacher-Student Collaborative Framework for Scalable Iterative Finetuning of Large Language Models
['Chen Zhang', 'Chengguang Tang', 'Dading Chong', 'Ke Shi', 'Guohua Tang', 'Feng Jiang', 'Haizhou Li']
2,024
Conference on Empirical Methods in Natural Language Processing
4
76
['Computer Science']
2,405.20222
MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model
['Muyao Niu', 'Xiaodong Cun', 'Xintao Wang', 'Yong Zhang', 'Ying Shan', 'Yinqiang Zheng']
['cs.CV', 'cs.AI']
We present MOFA-Video, an advanced controllable image animation method that generates video from the given image using various additional controllable signals (such as human landmarks reference, manual trajectories, and another even provided video) or their combinations. This is different from previous methods which on...
2024-05-30T16:22:22Z
ECCV 2024 ; Project Page: https://myniuuu.github.io/MOFA_Video/ ; Codes: https://github.com/MyNiuuu/MOFA-Video
null
null
null
null
null
null
null
null
null
2,405.20233
Grokfast: Accelerated Grokking by Amplifying Slow Gradients
['Jaerin Lee', 'Bong Gyun Kang', 'Kihoon Kim', 'Kyoung Mu Lee']
['cs.LG', 'cs.AI']
One puzzling artifact in machine learning dubbed grokking is where delayed generalization is achieved tenfolds of iterations after near perfect overfitting to the training data. Focusing on the long delay itself on behalf of machine learning practitioners, our goal is to accelerate generalization of a model under grokk...
2024-05-30T16:35:30Z
17 pages, 13 figures. Typo fixed. Project page: https://jaerinlee.com/research/grokfast
null
null
Grokfast: Accelerated Grokking by Amplifying Slow Gradients
['Jaerin Lee', 'Bong Gyun Kang', 'Kihoon Kim', 'Kyoung Mu Lee']
2,024
arXiv.org
13
28
['Computer Science']
2,405.20315
ANAH: Analytical Annotation of Hallucinations in Large Language Models
['Ziwei Ji', 'Yuzhe Gu', 'Wenwei Zhang', 'Chengqi Lyu', 'Dahua Lin', 'Kai Chen']
['cs.CL', 'cs.AI']
Reducing the `$\textit{hallucination}$' problem of Large Language Models (LLMs) is crucial for their wide applications. A comprehensive and fine-grained measurement of the hallucination is the first key step for the governance of this issue but is under-explored in the community. Thus, we present $\textbf{ANAH}$, a bil...
2024-05-30T17:54:40Z
Accepted by ACL 2024
null
null
ANAH: Analytical Annotation of Hallucinations in Large Language Models
['Ziwei Ji', 'Yuzhe Gu', 'Wenwei Zhang', 'Chengqi Lyu', 'Dahua Lin', 'Kai Chen']
2024
Annual Meeting of the Association for Computational Linguistics
3
61
['Computer Science']
2405.20324
Don't drop your samples! Coherence-aware training benefits Conditional diffusion
['Nicolas Dufour', 'Victor Besnier', 'Vicky Kalogeiton', 'David Picard']
['cs.CV', 'cs.LG']
Conditional diffusion models are powerful generative models that can leverage various types of conditional information, such as class labels, segmentation masks, or text captions. However, in many real-world scenarios, conditional information may be noisy or unreliable due to human annotation errors or weak alignment. ...
2024-05-30T17:57:26Z
Accepted at CVPR 2024 as a Highlight. Project page: https://nicolas-dufour.github.io/cad.html
null
null
null
null
null
null
null
null
null
2405.20335
Xwin-LM: Strong and Scalable Alignment Practice for LLMs
['Bolin Ni', 'JingCheng Hu', 'Yixuan Wei', 'Houwen Peng', 'Zheng Zhang', 'Gaofeng Meng', 'Han Hu']
['cs.CL']
In this work, we present Xwin-LM, a comprehensive suite of alignment methodologies for large language models (LLMs). This suite encompasses several key techniques, including supervised finetuning (SFT), reward modeling (RM), rejection sampling finetuning (RS), and direct preference optimization (DPO). The key component...
2024-05-30T17:59:31Z
null
null
null
null
null
null
null
null
null
null
2405.20340
MotionLLM: Understanding Human Behaviors from Human Motions and Videos
['Ling-Hao Chen', 'Shunlin Lu', 'Ailing Zeng', 'Hao Zhang', 'Benyou Wang', 'Ruimao Zhang', 'Lei Zhang']
['cs.CV']
This study delves into the realm of multi-modality (i.e., video and motion modalities) human behavior understanding by leveraging the powerful capabilities of Large Language Models (LLMs). Diverging from recent LLMs designed for video-only or motion-only understanding, we argue that understanding human behavior necessi...
2024-05-30T17:59:50Z
MotionLLM version 1.0, project page see https://lhchen.top/MotionLLM
null
null
MotionLLM: Understanding Human Behaviors from Human Motions and Videos
['Ling-Hao Chen', 'Shunlin Lu', 'Ailing Zeng', 'Hao Zhang', 'Benyou Wang', 'Ruimao Zhang', 'Lei Zhang']
2024
arXiv.org
38
91
['Computer Science']
2405.20343
Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image
['Kailu Wu', 'Fangfu Liu', 'Zhihan Cai', 'Runjie Yan', 'Hanyang Wang', 'Yating Hu', 'Yueqi Duan', 'Kaisheng Ma']
['cs.CV', 'cs.GR', 'cs.LG', 'I.2.10']
In this work, we introduce Unique3D, a novel image-to-3D framework for efficiently generating high-quality 3D meshes from single-view images, featuring state-of-the-art generation fidelity and strong generalizability. Previous methods based on Score Distillation Sampling (SDS) can produce diversified 3D results by dist...
2024-05-30T17:59:54Z
Project page: https://wukailu.github.io/Unique3D
null
null
null
null
null
null
null
null
null
2405.20462
Multi-Label Guided Soft Contrastive Learning for Efficient Earth Observation Pretraining
['Yi Wang', 'Conrad M Albrecht', 'Xiao Xiang Zhu']
['cs.CV']
Self-supervised pretraining on large-scale satellite data has raised great interest in building Earth observation (EO) foundation models. However, many important resources beyond pure satellite imagery, such as land-cover-land-use products that provide free global semantic information, as well as vision foundation mode...
2024-05-30T20:19:42Z
Accepted by IEEE Transactions on Geoscience and Remote Sensing. 16 pages, 10 figures
null
null
null
null
null
null
null
null
null
2405.20494
Slight Corruption in Pre-training Data Makes Better Diffusion Models
['Hao Chen', 'Yujin Han', 'Diganta Misra', 'Xiang Li', 'Kai Hu', 'Difan Zou', 'Masashi Sugiyama', 'Jindong Wang', 'Bhiksha Raj']
['cs.CV', 'cs.AI', 'cs.LG']
Diffusion models (DMs) have shown remarkable capabilities in generating realistic high-quality images, audios, and videos. They benefit significantly from extensive pre-training on large-scale datasets, including web-crawled data with paired data and conditions, such as image-text and image-class pairs. Despite rigorou...
2024-05-30T21:35:48Z
NeurIPS 2024 Spotlight
null
null
null
null
null
null
null
null
null
2405.20541
Perplexed by Perplexity: Perplexity-Based Data Pruning With Small Reference Models
['Zachary Ankner', 'Cody Blakeney', 'Kartik Sreenivasan', 'Max Marion', 'Matthew L. Leavitt', 'Mansheej Paul']
['cs.LG', 'cs.CL']
In this work, we investigate whether small language models can determine high-quality subsets of large-scale text datasets that improve the performance of larger language models. While existing work has shown that pruning based on the perplexity of a larger model can yield high-quality data, we investigate whether smal...
2024-05-30T23:50:20Z
null
null
null
Perplexed by Perplexity: Perplexity-Based Data Pruning With Small Reference Models
['Zachary Ankner', 'Cody Blakeney', 'Kartik K. Sreenivasan', 'Max Marion', 'Matthew L. Leavitt', 'Mansheej Paul']
2024
International Conference on Learning Representations
34
60
['Computer Science']
2405.20768
Expanded Gating Ranges Improve Activation Functions
['Allen Hao Huang']
['cs.NE', 'cs.LG']
Activation functions are core components of all deep learning architectures. Currently, the most popular activation functions are smooth ReLU variants like GELU and SiLU. These are self-gated activation functions where the range of the gating function is between zero and one. In this paper, we explore the viability of ...
2024-05-25T09:12:17Z
null
null
null
null
null
null
null
null
null
null
2405.20797
Ovis: Structural Embedding Alignment for Multimodal Large Language Model
['Shiyin Lu', 'Yang Li', 'Qing-Guo Chen', 'Zhao Xu', 'Weihua Luo', 'Kaifu Zhang', 'Han-Jia Ye']
['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG']
Current Multimodal Large Language Models (MLLMs) typically integrate a pre-trained LLM with another pre-trained vision transformer through a connector, such as an MLP, endowing the LLM with visual capabilities. However, the misalignment between two embedding strategies in MLLMs -- the structural textual embeddings base...
2024-05-31T13:59:18Z
null
null
null
null
null
null
null
null
null
null
2405.21028
LACIE: Listener-Aware Finetuning for Confidence Calibration in Large Language Models
['Elias Stengel-Eskin', 'Peter Hase', 'Mohit Bansal']
['cs.CL', 'cs.AI']
When answering questions, LLMs can convey not only an answer, but a level of confidence about the answer being correct. This includes explicit confidence markers (e.g. giving a numeric score) as well as implicit markers, like an authoritative tone or elaborating with additional knowledge. For LLMs to be trustworthy kno...
2024-05-31T17:16:38Z
18 pages. Code: https://github.com/esteng/pragmatic_calibration
null
null
null
null
null
null
null
null
null
2405.21046
Exploratory Preference Optimization: Harnessing Implicit Q*-Approximation for Sample-Efficient RLHF
['Tengyang Xie', 'Dylan J. Foster', 'Akshay Krishnamurthy', 'Corby Rosset', 'Ahmed Awadallah', 'Alexander Rakhlin']
['cs.LG', 'cs.AI', 'cs.CL', 'stat.ML']
Reinforcement learning from human feedback (RLHF) has emerged as a central tool for language model alignment. We consider online exploration in RLHF, which exploits interactive access to human or AI feedback by deliberately encouraging the model to produce diverse, maximally informative responses. By allowing RLHF to c...
2024-05-31T17:39:06Z
null
null
null
Exploratory Preference Optimization: Harnessing Implicit Q*-Approximation for Sample-Efficient RLHF
['Tengyang Xie', 'Dylan J. Foster', 'Akshay Krishnamurthy', 'Corby Rosset', 'Ahmed Awadallah', 'A. Rakhlin']
2024
arXiv.org
45
73
['Computer Science', 'Mathematics']