Dataset columns (type and preview statistics):

arxiv_id: float64 (values 1.5k to 2.51k)
title: string (lengths 9 to 178)
authors: string (lengths 2 to 22.8k)
categories: string (lengths 4 to 146)
summary: string (lengths 103 to 1.92k)
published: string date (2015-02-06 10:44:00 to 2025-07-10 17:59:58)
comments: string (lengths 2 to 417)
journal_ref: string (321 distinct values)
doi: string (398 distinct values)
ss_title: string (lengths 8 to 159)
ss_authors: string (lengths 11 to 8.38k)
ss_year: float64 (values 2.02k to 2.03k)
ss_venue: string (281 distinct values)
ss_citationCount: float64 (0 to 134k)
ss_referenceCount: float64 (0 to 429)
ss_fieldsOfStudy: string (47 distinct values)
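The rows below are per-paper records: arXiv metadata (arxiv_id through doi) joined with Semantic Scholar fields (the ss_* columns), with null wherever a value or a match is missing. A minimal loading and inspection sketch with the Hugging Face datasets library follows; the repository id is a placeholder, since this preview does not name the dataset.

```python
# Minimal sketch: load and inspect a dataset with the schema above.
# The repo id below is a hypothetical placeholder, not the actual dataset name.
from datasets import load_dataset

ds = load_dataset("user/arxiv-ss-papers", split="train")

print(ds.features)  # column names and dtypes, matching the schema above
row = ds[0]
print(row["arxiv_id"], row["title"], row["published"])
# Papers without a Semantic Scholar match have None in the ss_* columns.
print(row["ss_venue"], row["ss_citationCount"])
```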
2406.06371
mHuBERT-147: A Compact Multilingual HuBERT Model
['Marcely Zanon Boito', 'Vivek Iyer', 'Nikolaos Lagos', 'Laurent Besacier', 'Ioan Calapodescu']
['cs.CL', 'cs.SD', 'eess.AS']
We present mHuBERT-147, the first general-purpose massively multilingual HuBERT speech representation model trained on 90K hours of clean, open-license data. To scale up the multi-iteration HuBERT approach, we use faiss-based clustering, achieving 5.2x faster label assignment than the original method. We also apply a n...
2024-06-10T15:32:42Z
Extended version of the Interspeech 2024 paper of same name
null
null
null
null
null
null
null
null
null
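The record above is arXiv:2406.06371. Because arxiv_id is stored as float64, previews render it with a thousands separator and drop trailing zeros; the sketch below recovers the canonical identifier string, assuming the post-2014 YYMM.NNNNN arXiv scheme with a five-digit sequence number (which holds for the 2015-2025 publication range of this dataset).

```python
# Minimal sketch, assuming YYMM.NNNNN arXiv identifiers (five-digit sequence
# numbers, used for every paper in this date range). Formatting the float64
# value with five decimals restores the dropped trailing zeros.
def to_arxiv_id(value: float) -> str:
    return f"{value:.5f}"  # e.g. 2406.06371 -> "2406.06371"

assert to_arxiv_id(2406.0689) == "2406.06890"
```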
2406.06419
Foundation Inference Models for Markov Jump Processes
['David Berghaus', 'Kostadin Cvejoski', 'Patrick Seifner', 'Cesar Ojeda', 'Ramses J. Sanchez']
['cs.LG', 'stat.ML']
Markov jump processes are continuous-time stochastic processes which describe dynamical systems evolving in discrete state spaces. These processes find wide application in the natural sciences and machine learning, but their inference is known to be far from trivial. In this work we introduce a methodology for zero-sho...
2024-06-10T16:12:00Z
null
null
null
Foundation Inference Models for Markov Jump Processes
['David Berghaus', 'K. Cvejoski', 'Patrick Seifner', 'C. Ojeda', 'Ramsés J. Sánchez']
2024
Neural Information Processing Systems
1
49
['Computer Science', 'Mathematics']
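Unlike the first record, this one has a Semantic Scholar match, so ss_year, ss_citationCount and ss_referenceCount carry float64 values (2024, 1 and 49 here) rather than null. A minimal sketch for coercing these columns to integers while preserving missing values:

```python
# Minimal sketch: the ss_* numeric columns are float64 and None when no
# Semantic Scholar match exists, so coerce them defensively.
from typing import Optional

def to_int(value: Optional[float]) -> Optional[int]:
    return None if value is None else int(value)

row = {"ss_year": 2024.0, "ss_citationCount": 1.0, "ss_referenceCount": 49.0}
cleaned = {k: to_int(v) for k, v in row.items()}
print(cleaned)  # {'ss_year': 2024, 'ss_citationCount': 1, 'ss_referenceCount': 49}
```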
2406.06424
Margin-aware Preference Optimization for Aligning Diffusion Models without Reference
['Jiwoo Hong', 'Sayak Paul', 'Noah Lee', 'Kashif Rasul', 'James Thorne', 'Jongheon Jeong']
['cs.CV']
Modern alignment techniques based on human preferences, such as RLHF and DPO, typically employ divergence regularization relative to the reference model to ensure training stability. However, this often limits the flexibility of models during alignment, especially when there is a clear distributional discrepancy betwee...
2024-06-10T16:14:45Z
Preprint
null
null
Margin-aware Preference Optimization for Aligning Diffusion Models without Reference
['Jiwoo Hong', 'Sayak Paul', 'Noah Lee', 'Kashif Rasul', 'James Thorne', 'Jongheon Jeong']
2024
arXiv.org
18
95
['Computer Science']
2406.06484
Parallelizing Linear Transformers with the Delta Rule over Sequence Length
['Songlin Yang', 'Bailin Wang', 'Yu Zhang', 'Yikang Shen', 'Yoon Kim']
['cs.LG', 'cs.CL']
Transformers with linear attention (i.e., linear transformers) and state-space models have recently been suggested as a viable linear-time alternative to transformers with softmax attention. However, these models still underperform transformers especially on tasks that require in-context retrieval. While more expressiv...
2024-06-10T17:24:42Z
Final camera ready
null
null
Parallelizing Linear Transformers with the Delta Rule over Sequence Length
['Songlin Yang', 'Bailin Wang', 'Yu Zhang', 'Yikang Shen', 'Yoon Kim']
2024
Neural Information Processing Systems
89
135
['Computer Science']
2406.06496
Direct Preference Optimization for Suppressing Hallucinated Prior Exams in Radiology Report Generation
['Oishi Banerjee', 'Hong-Yu Zhou', 'Subathra Adithan', 'Stephen Kwak', 'Kay Wu', 'Pranav Rajpurkar']
['cs.LG', 'cs.CL', 'cs.CV']
Recent advances in generative vision-language models (VLMs) have exciting potential implications for AI in radiology, yet VLMs are also known to produce hallucinations, nonsensical text, and other unwanted behaviors that can waste clinicians' time and cause patient harm. Drawing on recent work on direct preference opti...
2024-06-10T17:31:36Z
Added acknowledgements
null
null
Direct Preference Optimization for Suppressing Hallucinated Prior Exams in Radiology Report Generation
['Oishi Banerjee', 'Hong-Yu Zhou', 'Subathra Adithan', 'Stephen Kwak', 'Kay Wu', 'P. Rajpurkar']
2024
arXiv.org
3
29
['Computer Science']
2406.06512
Merlin: A Vision Language Foundation Model for 3D Computed Tomography
['Louis Blankemeier', 'Joseph Paul Cohen', 'Ashwin Kumar', 'Dave Van Veen', 'Syed Jamal Safdar Gardezi', 'Magdalini Paschali', 'Zhihong Chen', 'Jean-Benoit Delbrouck', 'Eduardo Reis', 'Cesar Truyts', 'Christian Bluethgen', 'Malte Engmann Kjeldskov Jensen', 'Sophie Ostmeier', 'Maya Varma', 'Jeya Maria Jose Valanarasu', ...
['cs.CV', 'cs.AI']
Over 85 million computed tomography (CT) scans are performed annually in the US, of which approximately one quarter focus on the abdomen. Given the current radiologist shortage, there is a large impetus to use artificial intelligence to alleviate the burden of interpreting these complex imaging studies. Prior state-of-...
2024-06-10T17:53:01Z
18 pages, 7 figures
null
null
Merlin: A Vision Language Foundation Model for 3D Computed Tomography
['Louis Blankemeier', 'Joseph Paul Cohen', 'Ashwin Kumar', 'Dave Van Veen', 'Syed Jamal Safdar Gardezi', 'Magdalini Paschali', 'Zhihong Chen', 'Jean-Benoit Delbrouck', 'E. Reis', 'C. Truyts', 'Christian Bluethgen', 'Malte E. K. Jensen', 'Sophie Ostmeier', 'Maya Varma', 'Jeya Maria Jose Valanarasu', 'Zhongnan Fang', 'Ze...
2024
Research Square
41
94
['Computer Science', 'Medicine']
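The authors, categories, ss_authors and ss_fieldsOfStudy cells are strings holding Python-style list literals; in this preview, long cells such as the author list above are truncated with an ellipsis, but full rows parse cleanly. A minimal parsing sketch, using a short list taken from another record in this preview:

```python
# Minimal sketch: parse the list-literal string columns back into Python lists.
# Works on full dataset rows; truncated preview cells (ending in "...") will not parse.
import ast

authors = ast.literal_eval("['Haozhe Xie', 'Zhaoxi Chen', 'Fangzhou Hong', 'Ziwei Liu']")
print(len(authors), authors[0])  # 4 Haozhe Xie
```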
2406.06525
Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation
['Peize Sun', 'Yi Jiang', 'Shoufa Chen', 'Shilong Zhang', 'Bingyue Peng', 'Ping Luo', 'Zehuan Yuan']
['cs.CV']
We introduce LlamaGen, a new family of image generation models that apply original ``next-token prediction'' paradigm of large language models to visual generation domain. It is an affirmative answer to whether vanilla autoregressive models, e.g., Llama, without inductive biases on visual signals can achieve state-of-t...
2024-06-10T17:59:52Z
Codes and models: \url{https://github.com/FoundationVision/LlamaGen}
null
null
Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation
['Peize Sun', 'Yi Jiang', 'Shoufa Chen', 'Shilong Zhang', 'Bingyue Peng', 'Ping Luo', 'Zehuan Yuan']
2024
arXiv.org
301
98
['Computer Science']
2406.06526
Generative Gaussian Splatting for Unbounded 3D City Generation
['Haozhe Xie', 'Zhaoxi Chen', 'Fangzhou Hong', 'Ziwei Liu']
['cs.CV']
3D city generation with NeRF-based methods shows promising generation results but is computationally inefficient. Recently 3D Gaussian Splatting (3D-GS) has emerged as a highly efficient alternative for object-level 3D generation. However, adapting 3D-GS from finite-scale 3D objects and humans to infinite-scale 3D citi...
2024-06-10T17:59:55Z
CVPR 2025. Project Page: https://haozhexie.com/project/gaussian-city
null
null
Generative Gaussian Splatting for Unbounded 3D City Generation
['Haozhe Xie', 'Zhaoxi Chen', 'Fangzhou Hong', 'Ziwei Liu']
2024
Computer Vision and Pattern Recognition
12
66
['Computer Science']
2406.06561
Brainstorming Brings Power to Large Language Models of Knowledge Reasoning
['Zining Qin', 'Chenhao Wang', 'Huiling Qin', 'Weijia Jia']
['cs.CL', 'cs.AI']
Large Language Models (LLMs) have demonstrated amazing capabilities in language generation, text comprehension, and knowledge reasoning. While a single powerful model can already handle multiple tasks, relying on a single perspective can lead to biased and unstable results. Recent studies have further improved the mode...
2024-06-02T14:47:14Z
null
null
null
null
null
null
null
null
null
null
2406.06563
Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models
['Tianwen Wei', 'Bo Zhu', 'Liang Zhao', 'Cheng Cheng', 'Biye Li', 'Weiwei Lü', 'Peng Cheng', 'Jianhao Zhang', 'Xiaoyu Zhang', 'Liang Zeng', 'Xiaokun Wang', 'Yutuan Ma', 'Rui Hu', 'Shuicheng Yan', 'Han Fang', 'Yahui Zhou']
['cs.CL', 'cs.AI']
In this technical report, we introduce the training methodologies implemented in the development of Skywork-MoE, a high-performance mixture-of-experts (MoE) large language model (LLM) with 146 billion parameters and 16 experts. It is initialized from the pre-existing dense checkpoints of our Skywork-13B model. We explo...
2024-06-03T03:58:41Z
null
null
null
null
null
null
null
null
null
null
2406.06592
Improve Mathematical Reasoning in Language Models by Automated Process Supervision
['Liangchen Luo', 'Yinxiao Liu', 'Rosanne Liu', 'Samrat Phatale', 'Meiqi Guo', 'Harsh Lara', 'Yunxuan Li', 'Lei Shu', 'Yun Zhu', 'Lei Meng', 'Jiao Sun', 'Abhinav Rastogi']
['cs.CL', 'cs.LG']
Complex multi-step reasoning tasks, such as solving mathematical problems or generating code, remain a significant hurdle for even the most advanced large language models (LLMs). Verifying LLM outputs with an Outcome Reward Model (ORM) is a standard inference-time technique aimed at enhancing the reasoning performance ...
2024-06-05T19:25:40Z
17 pages, 5 figures, 2 tables
null
null
Improve Mathematical Reasoning in Language Models by Automated Process Supervision
['Liangchen Luo', 'Yinxiao Liu', 'Rosanne Liu', 'Samrat Phatale', 'Harsh Lara', 'Yunxuan Li', 'Lei Shu', 'Yun Zhu', 'Lei Meng', 'Jiao Sun', 'Abhinav Rastogi']
2024
arXiv.org
193
27
['Computer Science']
2406.06612
SEE-2-SOUND: Zero-Shot Spatial Environment-to-Spatial Sound
['Rishit Dagli', 'Shivesh Prakash', 'Robert Wu', 'Houman Khosravani']
['cs.CV', 'cs.LG', 'cs.SD', 'eess.AS']
Generating combined visual and auditory sensory experiences is critical for the consumption of immersive content. Recent advances in neural generative models have enabled the creation of high-resolution content across multiple modalities such as images, text, speech, and videos. Despite these successes, there remains a...
2024-06-06T22:55:01Z
Project Page: https://see2sound.github.io/
null
null
null
null
null
null
null
null
null
2406.06623
Spectrum: Targeted Training on Signal to Noise Ratio
['Eric Hartford', 'Lucas Atkins', 'Fernando Fernandes Neto', 'David Golchinfar']
['cs.LG', 'stat.ML']
Efficiently post-training large language models remains a challenging task due to the vast computational resources required. We present Spectrum, a method that accelerates LLM training by selectively targeting layer modules based on their signal-to-noise ratio (SNR), and freezing the remaining modules. Our approach, wh...
2024-06-07T21:20:57Z
null
null
null
null
null
null
null
null
null
null
2406.06890
Motion Consistency Model: Accelerating Video Diffusion with Disentangled Motion-Appearance Distillation
['Yuanhao Zhai', 'Kevin Lin', 'Zhengyuan Yang', 'Linjie Li', 'Jianfeng Wang', 'Chung-Ching Lin', 'David Doermann', 'Junsong Yuan', 'Lijuan Wang']
['cs.CV']
Image diffusion distillation achieves high-fidelity generation with very few sampling steps. However, applying these techniques directly to video diffusion often results in unsatisfactory frame quality due to the limited visual quality in public video datasets. This affects the performance of both teacher and student v...
2024-06-11T02:09:46Z
NeurIPS 2024; project page: https://yhzhai.github.io/mcm/
null
null
Motion Consistency Model: Accelerating Video Diffusion with Disentangled Motion-Appearance Distillation
['Yuanhao Zhai', 'K. Lin', 'Zhengyuan Yang', 'Linjie Li', 'Jianfeng Wang', 'Chung-Ching Lin', 'David S. Doermann', 'Junsong Yuan', 'Lijuan Wang']
2024
Neural Information Processing Systems
13
72
['Computer Science']
2406.06973
RWKV-CLIP: A Robust Vision-Language Representation Learner
['Tiancheng Gu', 'Kaicheng Yang', 'Xiang An', 'Ziyong Feng', 'Dongnan Liu', 'Weidong Cai', 'Jiankang Deng']
['cs.CV']
Contrastive Language-Image Pre-training (CLIP) has significantly improved performance in various vision-language tasks by expanding the dataset with image-text pairs obtained from websites. This paper further explores CLIP from the perspectives of data and model architecture. To address the prevalence of noisy data and...
2024-06-11T06:10:46Z
14 pages, 10 figures, EMNLP2024 Main
null
null
null
null
null
null
null
null
null
2406.06992
Scaling up masked audio encoder learning for general audio classification
['Heinrich Dinkel', 'Zhiyong Yan', 'Yongqing Wang', 'Junbo Zhang', 'Yujun Wang', 'Bin Wang']
['cs.SD', 'eess.AS']
Despite progress in audio classification, a generalization gap remains between speech and other sound domains, such as environmental sounds and music. Models trained for speech tasks often fail to perform well on environmental or musical audio tasks, and vice versa. While self-supervised (SSL) audio representations off...
2024-06-11T06:44:54Z
Interspeech 2024
null
null
null
null
null
null
null
null
null
2406.07115
Advancing Tool-Augmented Large Language Models: Integrating Insights from Errors in Inference Trees
['Sijia Chen', 'Yibo Wang', 'Yi-Feng Wu', 'Qing-Guo Chen', 'Zhao Xu', 'Weihua Luo', 'Kaifu Zhang', 'Lijun Zhang']
['cs.CL', 'cs.AI', 'cs.LG']
Tool-augmented large language models (LLMs) leverage tools, often in the form of APIs, to improve their reasoning capabilities on complex tasks. This enables them to act as intelligent agents interacting with the real world. The recently introduced ToolLLaMA model by Qin et al. [2023] utilizes the depth-first search-ba...
2024-06-11T10:00:18Z
Accepted by NeurIPS 2024
null
null
Advancing Tool-Augmented Large Language Models: Integrating Insights from Errors in Inference Trees
['Sijia Chen', 'Yibo Wang', 'Yi-Feng Wu', 'Qing-Guo Chen', 'Zhao Xu', 'Weihua Luo', 'Kaifu Zhang', 'Lijun Zhang']
2024
Neural Information Processing Systems
18
52
['Computer Science']
2406.07188
Merging Improves Self-Critique Against Jailbreak Attacks
['Victor Gallego']
['cs.CL', 'cs.AI']
The robustness of large language models (LLMs) against adversarial manipulations, such as jailbreak attacks, remains a significant challenge. In this work, we propose an approach that enhances the self-critique capability of the LLM and further fine-tunes it over sanitized synthetic data. This is done with the addition...
2024-06-11T12:01:09Z
Published at ICML 2024 Workshop on Foundation Models in the Wild
null
null
null
null
null
null
null
null
null
2406.07209
MS-Diffusion: Multi-subject Zero-shot Image Personalization with Layout Guidance
['Xierui Wang', 'Siming Fu', 'Qihan Huang', 'Wanggui He', 'Hao Jiang']
['cs.CV']
Recent advancements in text-to-image generation models have dramatically enhanced the generation of photorealistic images from textual prompts, leading to an increased interest in personalized text-to-image applications, particularly in multi-subject scenarios. However, these advances are hindered by two main challenge...
2024-06-11T12:32:53Z
null
null
null
MS-Diffusion: Multi-subject Zero-shot Image Personalization with Layout Guidance
['X. Wang', 'Siming Fu', 'Qihan Huang', 'Wanggui He', 'Hao Jiang']
2024
International Conference on Learning Representations
53
57
['Computer Science']
2406.07289
Can We Achieve High-quality Direct Speech-to-Speech Translation without Parallel Speech Data?
['Qingkai Fang', 'Shaolei Zhang', 'Zhengrui Ma', 'Min Zhang', 'Yang Feng']
['cs.CL', 'cs.AI', 'cs.SD', 'eess.AS', 'I.2.7']
Recently proposed two-pass direct speech-to-speech translation (S2ST) models decompose the task into speech-to-text translation (S2TT) and text-to-speech (TTS) within an end-to-end model, yielding promising results. However, the training of these models still relies on parallel speech data, which is extremely challengi...
2024-06-11T14:17:12Z
ACL 2024 main conference. Project Page: https://ictnlp.github.io/ComSpeech-Site/
null
null
null
null
null
null
null
null
null
2406.07394
Accessing GPT-4 level Mathematical Olympiad Solutions via Monte Carlo Tree Self-refine with LLaMa-3 8B
['Di Zhang', 'Xiaoshui Huang', 'Dongzhan Zhou', 'Yuqiang Li', 'Wanli Ouyang']
['cs.AI']
This paper introduces the MCT Self-Refine (MCTSr) algorithm, an innovative integration of Large Language Models (LLMs) with Monte Carlo Tree Search (MCTS), designed to enhance performance in complex mathematical reasoning tasks. Addressing the challenges of accuracy and reliability in LLMs, particularly in strategic an...
2024-06-11T16:01:07Z
null
null
null
null
null
null
null
null
null
null
2406.07461
Noise-robust Speech Separation with Fast Generative Correction
['Helin Wang', 'Jesus Villalba', 'Laureano Moro-Velazquez', 'Jiarui Hai', 'Thomas Thebaud', 'Najim Dehak']
['eess.AS']
Speech separation, the task of isolating multiple speech sources from a mixed audio signal, remains challenging in noisy environments. In this paper, we propose a generative correction method to enhance the output of a discriminative separator. By leveraging a generative corrector based on a diffusion model, we refine ...
2024-06-11T17:08:21Z
Accepted at INTERSPEECH 2024
null
null
null
null
null
null
null
null
null
2406.07476
VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs
['Zesen Cheng', 'Sicong Leng', 'Hang Zhang', 'Yifei Xin', 'Xin Li', 'Guanzheng Chen', 'Yongxin Zhu', 'Wenqi Zhang', 'Ziyang Luo', 'Deli Zhao', 'Lidong Bing']
['cs.CV', 'cs.CL']
In this paper, we present the VideoLLaMA 2, a set of Video Large Language Models (Video-LLMs) designed to enhance spatial-temporal modeling and audio understanding in video and audio-oriented tasks. Building upon its predecessor, VideoLLaMA 2 incorporates a tailor-made Spatial-Temporal Convolution (STC) connector, whic...
2024-06-11T17:22:23Z
ZC, SL, HZ, YX, and XL contributed equally to this project. Code: https://github.com/DAMO-NLP-SG/VideoLLaMA2
null
null
null
null
null
null
null
null
null
2406.07505
THaLLE: Text Hyperlocally Augmented Large Language Extension -- Technical Report
['KBTG Labs', 'Danupat Khamnuansin', 'Atthakorn Petchsod', 'Anuruth Lertpiya', 'Pornchanan Balee', 'Thanawat Lodkaew', 'Tawunrat Chalothorn', 'Thadpong Pongthawornkamol', 'Monchai Lertsutthiwong']
['cs.CL']
Recent advancements in Large Language Models (LLMs) have revealed new capabilities and opportunities across the technological landscape. However, the practicality of very large LLMs is challenged by their high compute cost, which does not justify the benefits given their limited capability compared to humans. While sma...
2024-06-11T17:40:00Z
null
null
null
THaLLE: Text Hyperlocally Augmented Large Language Extension - Technical Report
['Kbtg Labs', 'Danupat Khamnuansin', 'Atthakorn Petchsod', 'Anuruth Lertpiya', 'Pornchanan Balee', 'Thanawat Lodkaew', 'Tawunrat Chalothorn', 'Thadpong Pongthawornkamol', 'Monchai Lertsutthiwong']
2024
arXiv.org
1
7
['Computer Science']
2406.07522
Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling
['Liliang Ren', 'Yang Liu', 'Yadong Lu', 'Yelong Shen', 'Chen Liang', 'Weizhu Chen']
['cs.CL', 'cs.LG']
Efficiently modeling sequences with infinite context length has long been a challenging problem. Previous approaches have either suffered from quadratic computational complexity or limited extrapolation ability in length generalization. In this work, we present Samba, a simple hybrid architecture that layer-wise combin...
2024-06-11T17:50:51Z
Accepted by ICLR 2025. Camera-ready Version
null
null
null
null
null
null
null
null
null
2406.07524
Simple and Effective Masked Diffusion Language Models
['Subham Sekhar Sahoo', 'Marianne Arriola', 'Yair Schiff', 'Aaron Gokaslan', 'Edgar Marroquin', 'Justin T Chiu', 'Alexander Rush', 'Volodymyr Kuleshov']
['cs.CL', 'cs.AI', 'cs.LG']
While diffusion models excel at generating high-quality images, prior work reports a significant performance gap between diffusion and autoregressive (AR) methods in language modeling. In this work, we show that simple masked discrete diffusion is more performant than previously thought. We apply an effective training ...
2024-06-11T17:51:40Z
NeurIPS 2024. We provide the code at https://github.com/kuleshov-group/mdlm
null
null
null
null
null
null
null
null
null
2406.07543
Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning
['Chenyu Yang', 'Xizhou Zhu', 'Jinguo Zhu', 'Weijie Su', 'Junjie Wang', 'Xuan Dong', 'Wenhai Wang', 'Lewei Lu', 'Bin Li', 'Jie Zhou', 'Yu Qiao', 'Jifeng Dai']
['cs.CV']
Recently, vision model pre-training has evolved from relying on manually annotated datasets to leveraging large-scale, web-crawled image-text data. Despite these advances, there is no pre-training method that effectively exploits the interleaved image-text data, which is very prevalent on the Internet. Inspired by the ...
2024-06-11T17:59:35Z
null
null
null
null
null
null
null
null
null
null
2406.07547
Zero-shot Image Editing with Reference Imitation
['Xi Chen', 'Yutong Feng', 'Mengting Chen', 'Yiyang Wang', 'Shilong Zhang', 'Yu Liu', 'Yujun Shen', 'Hengshuang Zhao']
['cs.CV']
Image editing serves as a practical yet challenging task considering the diverse demands from users, where one of the hardest parts is to precisely describe how the edited image should look like. In this work, we present a new form of editing, termed imitative editing, to help users exercise their creativity more conve...
2024-06-11T17:59:51Z
https://xavierchen34.github.io/MimicBrush-Page
null
null
null
null
null
null
null
null
null
2406.07550
An Image is Worth 32 Tokens for Reconstruction and Generation
['Qihang Yu', 'Mark Weber', 'Xueqing Deng', 'Xiaohui Shen', 'Daniel Cremers', 'Liang-Chieh Chen']
['cs.CV']
Recent advancements in generative models have highlighted the crucial role of image tokenization in the efficient synthesis of high-resolution images. Tokenization, which transforms images into latent representations, reduces computational demands compared to directly processing pixels and enhances the effectiveness an...
2024-06-11T17:59:56Z
A compact 1D Image Tokenization method, leading to SOTA generation performance while being substantially faster. Project page at https://yucornetto.github.io/projects/titok.html
null
null
null
null
null
null
null
null
null
2406.07599
CTIBench: A Benchmark for Evaluating LLMs in Cyber Threat Intelligence
['Md Tanvirul Alam', 'Dipkamal Bhusal', 'Le Nguyen', 'Nidhi Rastogi']
['cs.CR', 'cs.AI']
Cyber threat intelligence (CTI) is crucial in today's cybersecurity landscape, providing essential insights to understand and mitigate the ever-evolving cyber threats. The recent rise of Large Language Models (LLMs) have shown potential in this domain, but concerns about their reliability, accuracy, and hallucinations ...
2024-06-11T16:42:02Z
null
null
null
CTIBench: A Benchmark for Evaluating LLMs in Cyber Threat Intelligence
['Md Tanvirul Alam', 'Dipkamal Bhusal', 'Le Nguyen', 'Nidhi Rastogi']
2024
Neural Information Processing Systems
22
50
['Computer Science']
2406.07815
Are Large Language Models Good Statisticians?
['Yizhang Zhu', 'Shiyin Du', 'Boyan Li', 'Yuyu Luo', 'Nan Tang']
['cs.CL', 'cs.AI']
Large Language Models (LLMs) have demonstrated impressive capabilities across a range of scientific tasks including mathematics, physics, and chemistry. Despite their successes, the effectiveness of LLMs in handling complex statistical tasks remains systematically under-explored. To bridge this gap, we introduce StatQA...
2024-06-12T02:23:51Z
Accepted by NeurIPS 2024 D&B. 34 pages, 11 figures, 21 tables
null
null
null
null
null
null
null
null
null
2406.07835
SciRIFF: A Resource to Enhance Language Model Instruction-Following over Scientific Literature
['David Wadden', 'Kejian Shi', 'Jacob Morrison', 'Aakanksha Naik', 'Shruti Singh', 'Nitzan Barzilay', 'Kyle Lo', 'Tom Hope', 'Luca Soldaini', 'Shannon Zejiang Shen', 'Doug Downey', 'Hannaneh Hajishirzi', 'Arman Cohan']
['cs.CL', 'cs.AI']
We present SciRIFF (Scientific Resource for Instruction-Following and Finetuning), a dataset of 137K instruction-following demonstrations for 54 tasks covering five essential scientific literature understanding capabilities: information extraction, summarization, question answering, claim verification, and classificati...
2024-06-10T21:22:08Z
Submitted to NeurIPS Datasets and Benchmarks 2024
null
null
null
null
null
null
null
null
null
2406.07887
An Empirical Study of Mamba-based Language Models
['Roger Waleffe', 'Wonmin Byeon', 'Duncan Riach', 'Brandon Norick', 'Vijay Korthikanti', 'Tri Dao', 'Albert Gu', 'Ali Hatamizadeh', 'Sudhakar Singh', 'Deepak Narayanan', 'Garvit Kulshreshtha', 'Vartika Singh', 'Jared Casper', 'Jan Kautz', 'Mohammad Shoeybi', 'Bryan Catanzaro']
['cs.LG', 'cs.CL']
Selective state-space models (SSMs) like Mamba overcome some of the shortcomings of Transformers, such as quadratic computational complexity with sequence length and large inference-time memory requirements from the key-value cache. Moreover, recent studies have shown that SSMs can match or exceed the language modeling...
2024-06-12T05:25:15Z
null
null
null
An Empirical Study of Mamba-based Language Models
['R. Waleffe', 'Wonmin Byeon', 'Duncan Riach', 'Brandon Norick', 'V. Korthikanti', 'Tri Dao', 'Albert Gu', 'Ali Hatamizadeh', 'Sudhakar Singh', 'Deepak Narayanan', 'Garvit Kulshreshtha', 'Vartika Singh', 'J. Casper', 'Jan Kautz', 'M. Shoeybi', 'Bryan Catanzaro']
2024
arXiv.org
79
53
['Computer Science']
2406.08055
Learning Job Title Representation from Job Description Aggregation Network
['Napat Laosaengpha', 'Thanit Tativannarat', 'Chawan Piansaddhayanon', 'Attapol Rutherford', 'Ekapol Chuangsuwanich']
['cs.CL']
Learning job title representation is a vital process for developing automatic human resource tools. To do so, existing methods primarily rely on learning the title representation through skills extracted from the job description, neglecting the rich and diverse content within. Thus, we propose an alternative framework ...
2024-06-12T10:12:52Z
to be published in Findings of the Association for Computational Linguistics: ACL 2024
null
null
Learning Job Title Representation from Job Description Aggregation Network
['Napat Laosaengpha', 'Thanit Tativannarat', 'Chawan Piansaddhayanon', 'Attapol Rutherford', 'Ekapol Chuangsuwanich']
2024
Annual Meeting of the Association for Computational Linguistics
1
31
['Computer Science']
2406.08085
Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams
['Haoji Zhang', 'Yiqin Wang', 'Yansong Tang', 'Yong Liu', 'Jiashi Feng', 'Jifeng Dai', 'Xiaojie Jin']
['cs.CV']
Benefiting from the advancements in large language models and cross-modal alignment, existing multi-modal video understanding methods have achieved prominent performance in offline scenario. However, online video streams, as one of the most common media forms in the real world, have seldom received attention. Compared ...
2024-06-12T11:07:55Z
null
null
null
Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams
['Haoji Zhang', 'Yiqin Wang', 'Yansong Tang', 'Yong Liu', 'Jiashi Feng', 'Jifeng Dai', 'Xiaojie Jin']
2024
arXiv.org
45
50
['Computer Science']
2406.08100
Multimodal Table Understanding
['Mingyu Zheng', 'Xinwei Feng', 'Qingyi Si', 'Qiaoqiao She', 'Zheng Lin', 'Wenbin Jiang', 'Weiping Wang']
['cs.CL', 'cs.AI']
Although great progress has been made by previous table understanding methods including recent approaches based on large language models (LLMs), they rely heavily on the premise that given tables must be converted into a certain text sequence (such as Markdown or HTML) to serve as model input. However, it is difficult ...
2024-06-12T11:27:03Z
23 pages, 16 figures, ACL 2024 main conference, camera-ready version
null
null
Multimodal Table Understanding
['Mingyu Zheng', 'Xinwei Feng', 'Q. Si', 'Qiaoqiao She', 'Zheng Lin', 'Wenbin Jiang', 'Weiping Wang']
2024
Annual Meeting of the Association for Computational Linguistics
20
60
['Computer Science']
2406.08164
ConMe: Rethinking Evaluation of Compositional Reasoning for Modern VLMs
['Irene Huang', 'Wei Lin', 'M. Jehanzeb Mirza', 'Jacob A. Hansen', 'Sivan Doveh', 'Victor Ion Butoi', 'Roei Herzig', 'Assaf Arbelle', 'Hilde Kuehne', 'Trevor Darrell', 'Chuang Gan', 'Aude Oliva', 'Rogerio Feris', 'Leonid Karlinsky']
['cs.CV']
Compositional Reasoning (CR) entails grasping the significance of attributes, relations, and word order. Recent Vision-Language Models (VLMs), comprising a visual encoder and a Large Language Model (LLM) decoder, have demonstrated remarkable proficiency in such reasoning tasks. This prompts a crucial question: have VLM...
2024-06-12T12:54:27Z
NeurIPS 2024 Camera Ready
null
null
ConMe: Rethinking Evaluation of Compositional Reasoning for Modern VLMs
['Irene Huang', 'Wei Lin', 'M. J. Mirza', 'Jacob Hansen', 'Sivan Doveh', 'V. Butoi', 'Roei Herzig', 'Assaf Arbelle', 'Hilde Kuhene', 'Trevor Darrel', 'Chuang Gan', 'Aude Oliva', 'Rogério Feris', 'Leonid Karlinsky']
2024
Neural Information Processing Systems
9
64
['Computer Science']
2406.08310
GraphFM: A Comprehensive Benchmark for Graph Foundation Model
['Yuhao Xu', 'Xinqi Liu', 'Keyu Duan', 'Yi Fang', 'Yu-Neng Chuang', 'Daochen Zha', 'Qiaoyu Tan']
['cs.LG']
Foundation Models (FMs) serve as a general class for the development of artificial intelligence systems, offering broad potential for generalization across a spectrum of downstream tasks. Despite extensive research into self-supervised learning as the cornerstone of FMs, several outstanding issues persist in Graph Foun...
2024-06-12T15:10:44Z
null
null
null
null
null
null
null
null
null
null
2406.08391
Large Language Models Must Be Taught to Know What They Don't Know
['Sanyam Kapoor', 'Nate Gruver', 'Manley Roberts', 'Katherine Collins', 'Arka Pal', 'Umang Bhatt', 'Adrian Weller', 'Samuel Dooley', 'Micah Goldblum', 'Andrew Gordon Wilson']
['cs.LG', 'cs.AI', 'cs.CL', 'stat.ML']
When using large language models (LLMs) in high-stakes applications, we need to know when we can trust their predictions. Some works argue that prompting high-performance LLMs is sufficient to produce calibrated uncertainties, while others introduce sampling methods that can be prohibitively expensive. In this work, we...
2024-06-12T16:41:31Z
NeurIPS 2024 Camera Ready
null
null
null
null
null
null
null
null
null
2406.08414
Discovering Preference Optimization Algorithms with and for Large Language Models
['Chris Lu', 'Samuel Holt', 'Claudio Fanconi', 'Alex J. Chan', 'Jakob Foerster', 'Mihaela van der Schaar', 'Robert Tjarko Lange']
['cs.LG']
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs. Typically, preference optimization is approached as an offline supervised learning task using manually-crafted convex loss functions. While these methods are based on theoretical insights, th...
2024-06-12T16:58:41Z
null
null
null
Discovering Preference Optimization Algorithms with and for Large Language Models
['Chris Lu', 'Samuel Holt', 'Claudio Fanconi', 'Alex J. Chan', 'J. Foerster', 'M. Schaar', 'R. T. Lange']
2024
Neural Information Processing Systems
18
84
['Computer Science']
2406.08418
OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text
['Qingyun Li', 'Zhe Chen', 'Weiyun Wang', 'Wenhai Wang', 'Shenglong Ye', 'Zhenjiang Jin', 'Guanzhou Chen', 'Yinan He', 'Zhangwei Gao', 'Erfei Cui', 'Jiashuo Yu', 'Hao Tian', 'Jiasheng Zhou', 'Chao Xu', 'Bin Wang', 'Xingjian Wei', 'Wei Li', 'Wenjian Zhang', 'Bo Zhang', 'Pinlong Cai', 'Licheng Wen', 'Xiangchao Yan', 'Zhe...
['cs.CV', 'cs.AI']
Image-text interleaved data, consisting of multiple images and texts arranged in a natural document format, aligns with the presentation paradigm of internet data and closely resembles human reading habits. Recent studies have shown that such data aids multimodal in-context learning and maintains the capabilities of la...
2024-06-12T17:01:04Z
null
null
null
null
null
null
null
null
null
null
2406.08446
OLMES: A Standard for Language Model Evaluations
['Yuling Gu', 'Oyvind Tafjord', 'Bailey Kuehl', 'Dany Haddad', 'Jesse Dodge', 'Hannaneh Hajishirzi']
['cs.CL', 'cs.AI']
Progress in AI is often demonstrated by new models claiming improved performance on tasks measuring model capabilities. Evaluating language models can be particularly challenging, as choices of how a model is evaluated on a task can lead to large changes in measured performance. There is no common standard setup, so di...
2024-06-12T17:37:09Z
Findings of NAACL 2025
null
null
OLMES: A Standard for Language Model Evaluations
['Yuling Gu', 'Oyvind Tafjord', 'Bailey Kuehl', 'Dany Haddad', 'Jesse Dodge', 'Hanna Hajishirzi']
2024
North American Chapter of the Association for Computational Linguistics
20
50
['Computer Science']
2406.08464
Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing
['Zhangchen Xu', 'Fengqing Jiang', 'Luyao Niu', 'Yuntian Deng', 'Radha Poovendran', 'Yejin Choi', 'Bill Yuchen Lin']
['cs.CL', 'cs.AI']
High-quality instruction data is critical for aligning large language models (LLMs). Although some models, such as Llama-3-Instruct, have open weights, their alignment data remain private, which hinders the democratization of AI. High human labor costs and a limited, predefined scope for prompting prevent existing open...
2024-06-12T17:52:30Z
Link: https://magpie-align.github.io/
null
null
null
null
null
null
null
null
null
2406.08478
What If We Recaption Billions of Web Images with LLaMA-3?
['Xianhang Li', 'Haoqin Tu', 'Mude Hui', 'Zeyu Wang', 'Bingchen Zhao', 'Junfei Xiao', 'Sucheng Ren', 'Jieru Mei', 'Qing Liu', 'Huangjie Zheng', 'Yuyin Zhou', 'Cihang Xie']
['cs.CV', 'cs.CL']
Web-crawled image-text pairs are inherently noisy. Prior studies demonstrate that semantically aligning and enriching textual descriptions of these pairs can significantly enhance model training across various vision-language tasks, particularly text-to-image generation. However, large-scale investigations in this area...
2024-06-12T17:59:07Z
First five authors contributed equally
null
null
null
null
null
null
null
null
null
2406.08479
Real3D: Scaling Up Large Reconstruction Models with Real-World Images
['Hanwen Jiang', 'Qixing Huang', 'Georgios Pavlakos']
['cs.CV']
The default strategy for training single-view Large Reconstruction Models (LRMs) follows the fully supervised route using large-scale datasets of synthetic 3D assets or multi-view captures. Although these resources simplify the training procedure, they are hard to scale up beyond the existing datasets and they are not ...
2024-06-12T17:59:08Z
Project page: https://hwjiang1510.github.io/Real3D/
null
null
null
null
null
null
null
null
null
2406.08487
Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models
['Yi-Fan Zhang', 'Qingsong Wen', 'Chaoyou Fu', 'Xue Wang', 'Zhang Zhang', 'Liang Wang', 'Rong Jin']
['cs.CV']
Seeing clearly with high resolution is a foundation of Large Multimodal Models (LMMs), which has been proven to be vital for visual perception and reasoning. Existing works usually employ a straightforward resolution upscaling method, where the image consists of global and local branches, with the latter being the slic...
2024-06-12T17:59:49Z
Project page: https://github.com/yfzhang114/SliME
null
null
null
null
null
null
null
null
null
2406.08657
Mistral-C2F: Coarse to Fine Actor for Analytical and Reasoning Enhancement in RLHF and Effective-Merged LLMs
['Chen Zheng', 'Ke Sun', 'Xun Zhou']
['cs.CL']
Despite the advances in Large Language Models (LLMs), exemplified by models like GPT-4 and Claude, smaller-scale LLMs such as Llama and Mistral often struggle with generating in-depth and coherent dialogues. This paper presents a novel two-step Coarse-to-Fine Actor model to address the inherent limitations in conversat...
2024-06-12T21:42:13Z
null
null
null
null
null
null
null
null
null
null
2406.08673
HelpSteer2: Open-source dataset for training top-performing reward models
['Zhilin Wang', 'Yi Dong', 'Olivier Delalleau', 'Jiaqi Zeng', 'Gerald Shen', 'Daniel Egert', 'Jimmy J. Zhang', 'Makesh Narsimhan Sreedhar', 'Oleksii Kuchaiev']
['cs.CL', 'cs.AI', 'cs.LG']
High-quality preference datasets are essential for training reward models that can effectively guide large language models (LLMs) in generating high-quality responses aligned with human preferences. As LLMs become stronger and better aligned, permissively licensed preference datasets, such as Open Assistant, HH-RLHF, a...
2024-06-12T22:28:08Z
null
null
null
null
null
null
null
null
null
null
2406.08707
mOSCAR: A Large-scale Multilingual and Multimodal Document-level Corpus
['Matthieu Futeral', 'Armel Zebaze', 'Pedro Ortiz Suarez', 'Julien Abadji', 'Rémi Lacroix', 'Cordelia Schmid', 'Rachel Bawden', 'Benoît Sagot']
['cs.CL', 'cs.CV']
Multimodal Large Language Models (mLLMs) are trained on a large amount of text-image data. While most mLLMs are trained on caption-like data only, Alayrac et al. (2022) showed that additionally training them on interleaved sequences of text and images can lead to the emergence of in-context learning capabilities. Howev...
2024-06-13T00:13:32Z
ACL 2025 (Findings)
null
null
mOSCAR: A Large-scale Multilingual and Multimodal Document-level Corpus
['Matthieu Futeral', 'A. Zebaze', 'Pedro Ortiz Suarez', 'Julien Abadji', "R'emi Lacroix", 'Cordelia Schmid', 'Rachel Bawden', 'Benoît Sagot']
2024
arXiv.org
3
107
['Computer Science']
2406.08801
Hallo: Hierarchical Audio-Driven Visual Synthesis for Portrait Image Animation
['Mingwang Xu', 'Hui Li', 'Qingkun Su', 'Hanlin Shang', 'Liwei Zhang', 'Ce Liu', 'Jingdong Wang', 'Yao Yao', 'Siyu Zhu']
['cs.CV']
The field of portrait image animation, driven by speech audio input, has experienced significant advancements in the generation of realistic and dynamic portraits. This research delves into the complexities of synchronizing facial movements and creating visually appealing, temporally consistent animations within the fr...
2024-06-13T04:33:20Z
20 pages
null
null
Hallo: Hierarchical Audio-Driven Visual Synthesis for Portrait Image Animation
['Mingwang Xu', 'Hui Li', 'Qingkun Su', 'Hanlin Shang', 'Liwei Zhang', 'Ce Liu', 'Jingdong Wang', 'Yao Yao', 'Siyu Zhu']
2024
arXiv.org
90
54
['Computer Science']
2406.08845
Rethinking Human Evaluation Protocol for Text-to-Video Models: Enhancing Reliability, Reproducibility, and Practicality
['Tianle Zhang', 'Langtian Ma', 'Yuchen Yan', 'Yuchen Zhang', 'Kai Wang', 'Yue Yang', 'Ziyao Guo', 'Wenqi Shao', 'Yang You', 'Yu Qiao', 'Ping Luo', 'Kaipeng Zhang']
['cs.CV']
Recent text-to-video (T2V) technology advancements, as demonstrated by models such as Gen2, Pika, and Sora, have significantly broadened its applicability and popularity. Despite these strides, evaluating these models poses substantial challenges. Primarily, due to the limitations inherent in automatic metrics, manual ...
2024-06-13T06:09:22Z
null
null
null
Rethinking Human Evaluation Protocol for Text-to-Video Models: Enhancing Reliability, Reproducibility, and Practicality
['Tianle Zhang', 'Langtian Ma', 'Yuchen Yan', 'Yuchen Zhang', 'Kai Wang', 'Yue Yang', 'Ziyao Guo', 'Wenqi Shao', 'Yang You', 'Yu Qiao', 'Ping Luo', 'Kaipeng Zhang']
2024
Neural Information Processing Systems
2
131
['Computer Science']
2406.09140
Investigating the translation capabilities of Large Language Models trained on parallel data only
['Javier García Gilabert', 'Carlos Escolano', 'Aleix Sant Savall', 'Francesca De Luca Fornaciari', 'Audrey Mash', 'Xixian Liao', 'Maite Melero']
['cs.CL']
In recent years, Large Language Models (LLMs) have demonstrated exceptional proficiency across a broad spectrum of Natural Language Processing (NLP) tasks, including Machine Translation. However, previous methods predominantly relied on iterative processes such as instruction fine-tuning or continual pre-training, leav...
2024-06-13T14:08:56Z
We release our code at: https://github.com/projecte-aina/Plume
null
null
null
null
null
null
null
null
null
2406.09168
SR-CACO-2: A Dataset for Confocal Fluorescence Microscopy Image Super-Resolution
['Soufiane Belharbi', 'Mara KM Whitford', 'Phuong Hoang', 'Shakeeb Murtaza', 'Luke McCaffrey', 'Eric Granger']
['eess.IV', 'cs.CV', 'cs.LG']
Confocal fluorescence microscopy is one of the most accessible and widely used imaging techniques for the study of biological processes at the cellular and subcellular levels. Scanning confocal microscopy allows the capture of high-quality images from thick three-dimensional (3D) samples, yet suffers from well-known li...
2024-06-13T14:30:35Z
27 pages, 15 figures, NeurIPS 2024
null
null
null
null
null
null
null
null
null
2406.09246
OpenVLA: An Open-Source Vision-Language-Action Model
['Moo Jin Kim', 'Karl Pertsch', 'Siddharth Karamcheti', 'Ted Xiao', 'Ashwin Balakrishna', 'Suraj Nair', 'Rafael Rafailov', 'Ethan Foster', 'Grace Lam', 'Pannag Sanketi', 'Quan Vuong', 'Thomas Kollar', 'Benjamin Burchfiel', 'Russ Tedrake', 'Dorsa Sadigh', 'Sergey Levine', 'Percy Liang', 'Chelsea Finn']
['cs.RO', 'cs.LG']
Large policies pretrained on a combination of Internet-scale vision-language data and diverse robot demonstrations have the potential to change how we teach robots new skills: rather than training new behaviors from scratch, we can fine-tune such vision-language-action (VLA) models to obtain robust, generalizable polic...
2024-06-13T15:46:55Z
Website: https://openvla.github.io/
null
null
OpenVLA: An Open-Source Vision-Language-Action Model
['Moo Jin Kim', 'Karl Pertsch', 'Siddharth Karamcheti', 'Ted Xiao', 'A. Balakrishna', 'Suraj Nair', 'Rafael Rafailov', 'Ethan Foster', 'Grace Lam', 'Pannag R. Sanketi', 'Quan Vuong', 'Thomas Kollar', 'Benjamin Burchfiel', 'Russ Tedrake', 'Dorsa Sadigh', 'Sergey Levine', 'Percy Liang', 'Chelsea Finn']
2024
Conference on Robot Learning
535
110
['Computer Science']
2406.09279
Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback
['Hamish Ivison', 'Yizhong Wang', 'Jiacheng Liu', 'Zeqiu Wu', 'Valentina Pyatkin', 'Nathan Lambert', 'Noah A. Smith', 'Yejin Choi', 'Hannaneh Hajishirzi']
['cs.CL']
Learning from preference feedback has emerged as an essential step for improving the generation quality and performance of modern language models (LMs). Despite its widespread use, the way preference-based learning is applied varies wildly, with differing data, learning algorithms, and evaluations used, making disentan...
2024-06-13T16:17:21Z
Neurips 2024 camera-ready
null
null
Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback
['Hamish Ivison', 'Yizhong Wang', 'Jiacheng Liu', 'Zeqiu Wu', 'Valentina Pyatkin', 'Nathan Lambert', 'Noah A. Smith', 'Yejin Choi', 'Hanna Hajishirzi']
2024
Neural Information Processing Systems
64
55
['Computer Science']
2406.09282
On the Effects of Heterogeneous Data Sources on Speech-to-Text Foundation Models
['Jinchuan Tian', 'Yifan Peng', 'William Chen', 'Kwanghee Choi', 'Karen Livescu', 'Shinji Watanabe']
['cs.CL', 'cs.SD', 'eess.AS']
The Open Whisper-style Speech Model (OWSM) series was introduced to achieve full transparency in building advanced speech-to-text (S2T) foundation models. To this end, OWSM models are trained on 25 public speech datasets, which are heterogeneous in multiple ways. In this study, we advance the OWSM series by introducing...
2024-06-13T16:22:37Z
null
null
null
null
null
null
null
null
null
null
2406.09293
StableMaterials: Enhancing Diversity in Material Generation via Semi-Supervised Learning
['Giuseppe Vecchio']
['cs.CV', 'cs.GR']
We introduce StableMaterials, a novel approach for generating photorealistic physical-based rendering (PBR) materials that integrate semi-supervised learning with Latent Diffusion Models (LDMs). Our method employs adversarial training to distill knowledge from existing large-scale image generation models, minimizing th...
2024-06-13T16:29:46Z
null
null
null
null
null
null
null
null
null
null
2406.09326
PianoMotion10M: Dataset and Benchmark for Hand Motion Generation in Piano Performance
['Qijun Gan', 'Song Wang', 'Shengtao Wu', 'Jianke Zhu']
['cs.SD', 'cs.AI', 'cs.CV', 'cs.MM', 'eess.AS']
Recently, artificial intelligence techniques for education have been received increasing attentions, while it still remains an open problem to design the effective music instrument instructing systems. Although key presses can be directly derived from sheet music, the transitional movements among key presses require mo...
2024-06-13T17:05:23Z
ICLR 2025 Spotlight
null
null
PianoMotion10M: Dataset and Benchmark for Hand Motion Generation in Piano Performance
['Qijun Gan', 'Song Wang', 'Shengtao Wu', 'Jianke Zhu']
2024
International Conference on Learning Representations
1
92
['Computer Science', 'Engineering']
2406.09367
Needle In A Video Haystack: A Scalable Synthetic Evaluator for Video MLLMs
['Zijia Zhao', 'Haoyu Lu', 'Yuqi Huo', 'Yifan Du', 'Tongtian Yue', 'Longteng Guo', 'Bingning Wang', 'Weipeng Chen', 'Jing Liu']
['cs.CV']
Video understanding is a crucial next step for multimodal large language models (MLLMs). Various benchmarks are introduced for better evaluating the MLLMs. Nevertheless, current video benchmarks are still inefficient for evaluating video models during iterative development due to the high cost of constructing datasets ...
2024-06-13T17:50:05Z
ICLR 2025
null
null
Needle In A Video Haystack: A Scalable Synthetic Evaluator for Video MLLMs
['Zijia Zhao', 'Haoyu Lu', 'Yuqi Huo', 'Yifan Du', 'Tongtian Yue', 'Longteng Guo', 'Bingning Wang', 'Weipeng Chen', 'Jing Liu']
2024
International Conference on Learning Representations
5
55
['Computer Science']
2406.09396
Too Many Frames, Not All Useful: Efficient Strategies for Long-Form Video QA
['Jongwoo Park', 'Kanchana Ranasinghe', 'Kumara Kahatapitiya', 'Wonjeong Ryu', 'Donghyun Kim', 'Michael S. Ryoo']
['cs.CV']
Long-form videos that span across wide temporal intervals are highly information redundant and contain multiple distinct events or entities that are often loosely related. Therefore, when performing long-form video question answering (LVQA), all information necessary to generate a correct response can often be containe...
2024-06-13T17:59:16Z
null
null
null
null
null
null
null
null
null
null
2406.09406
4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities
['Roman Bachmann', 'Oğuzhan Fatih Kar', 'David Mizrahi', 'Ali Garjani', 'Mingfei Gao', 'David Griffiths', 'Jiaming Hu', 'Afshin Dehghan', 'Amir Zamir']
['cs.CV', 'cs.AI', 'cs.LG']
Current multimodal and multitask foundation models like 4M or UnifiedIO show promising results, but in practice their out-of-the-box abilities to accept diverse inputs and perform diverse tasks are limited by the (usually rather small) number of modalities and tasks they are trained on. In this paper, we expand upon th...
2024-06-13T17:59:42Z
Project page at 4m.epfl.ch
null
null
null
null
null
null
null
null
null
2406.09412
Explore the Limits of Omni-modal Pretraining at Scale
['Yiyuan Zhang', 'Handong Li', 'Jing Liu', 'Xiangyu Yue']
['cs.CV', 'cs.AI', 'cs.LG', 'cs.MM']
We propose to build omni-modal intelligence, which is capable of understanding any modality and learning universal representations. In specific, we propose a scalable pretraining paradigm, named Multimodal Context (MiCo), which can scale up the numbers of modalities and amount of data, together with the model parameter...
2024-06-13T17:59:53Z
Project Website: https://invictus717.github.io/MiCo/
null
null
null
null
null
null
null
null
null
2406.09413
Interpreting the Weight Space of Customized Diffusion Models
['Amil Dravid', 'Yossi Gandelsman', 'Kuan-Chieh Wang', 'Rameen Abdal', 'Gordon Wetzstein', 'Alexei A. Efros', 'Kfir Aberman']
['cs.CV', 'cs.GR', 'cs.LG']
We investigate the space of weights spanned by a large collection of customized diffusion models. We populate this space by creating a dataset of over 60,000 models, each of which is a base model fine-tuned to insert a different person's visual identity. We model the underlying manifold of these weights as a subspace, ...
2024-06-13T17:59:56Z
Project Page: https://snap-research.github.io/weights2weights
null
null
Interpreting the Weight Space of Customized Diffusion Models
['Amil Dravid', 'Yossi Gandelsman', 'Kuan-Chieh Jackson Wang', 'Rameen Abdal', 'Gordon Wetzstein', 'Alexei A. Efros', 'Kfir Aberman']
2024
Neural Information Processing Systems
12
80
['Computer Science']
2406.09414
Depth Anything V2
['Lihe Yang', 'Bingyi Kang', 'Zilong Huang', 'Zhen Zhao', 'Xiaogang Xu', 'Jiashi Feng', 'Hengshuang Zhao']
['cs.CV']
This work presents Depth Anything V2. Without pursuing fancy techniques, we aim to reveal crucial findings to pave the way towards building a powerful monocular depth estimation model. Notably, compared with V1, this version produces much finer and more robust depth predictions through three key practices: 1) replacing...
2024-06-13T17:59:56Z
Accepted by NeurIPS 2024. Project page: https://depth-anything-v2.github.io
null
null
null
null
null
null
null
null
null
2406.09418
VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding
['Muhammad Maaz', 'Hanoona Rasheed', 'Salman Khan', 'Fahad Khan']
['cs.CV']
Building on the advances of language models, Large Multimodal Models (LMMs) have contributed significant improvements in video understanding. While the current video LMMs utilize advanced Large Language Models (LLMs), they rely on either image or video encoders to process visual inputs, each of which has its own limita...
2024-06-13T17:59:59Z
Technical Report
null
null
null
null
null
null
null
null
null
2406.09455
Pandora: Towards General World Model with Natural Language Actions and Video States
['Jiannan Xiang', 'Guangyi Liu', 'Yi Gu', 'Qiyue Gao', 'Yuting Ning', 'Yuheng Zha', 'Zeyu Feng', 'Tianhua Tao', 'Shibo Hao', 'Yemin Shi', 'Zhengzhong Liu', 'Eric P. Xing', 'Zhiting Hu']
['cs.CV', 'cs.AI', 'cs.CL']
World models simulate future states of the world in response to different actions. They facilitate interactive content creation and provide a foundation for grounded, long-horizon reasoning. Current foundation models do not fully meet the capabilities of general world models: large language models (LLMs) are constrain...
2024-06-12T18:55:51Z
Website: https://world-model.maitrix.org/
null
null
null
null
null
null
null
null
null
2406.09490
Newswire: A Large-Scale Structured Database of a Century of Historical News
['Emily Silcock', 'Abhishek Arora', "Luca D'Amico-Wong", 'Melissa Dell']
['cs.CL', 'econ.GN', 'q-fin.EC']
In the U.S. historically, local newspapers drew their content largely from newswires like the Associated Press. Historians argue that newswires played a pivotal role in creating a national identity and shared understanding of the world, but there is no comprehensive archive of the content sent over newswires. We recons...
2024-06-13T16:20:05Z
arXiv admin note: text overlap with arXiv:2306.17810, arXiv:2308.12477
null
null
null
null
null
null
null
null
null
2406.09627
RobustSAM: Segment Anything Robustly on Degraded Images
['Wei-Ting Chen', 'Yu-Jiet Vong', 'Sy-Yen Kuo', 'Sizhuo Ma', 'Jian Wang']
['cs.CV', 'cs.AI', 'eess.IV']
Segment Anything Model (SAM) has emerged as a transformative approach in image segmentation, acclaimed for its robust zero-shot segmentation capabilities and flexible prompting system. Nonetheless, its performance is challenged by images with degraded quality. Addressing this limitation, we propose the Robust Segment A...
2024-06-13T23:33:59Z
Accepted by CVPR2024 (Highlight); Project Page: https://robustsam.github.io/
null
null
null
null
null
null
null
null
null
2406.09756
Grounding Image Matching in 3D with MASt3R
['Vincent Leroy', 'Yohann Cabon', 'Jérôme Revaud']
['cs.CV']
Image Matching is a core component of all best-performing algorithms and pipelines in 3D vision. Yet despite matching being fundamentally a 3D problem, intrinsically linked to camera pose and scene geometry, it is typically treated as a 2D problem. This makes sense as the goal of matching is to establish correspondence...
2024-06-14T06:46:30Z
null
null
null
Grounding Image Matching in 3D with MASt3R
['Vincent Leroy', 'Yohann Cabon', 'Jérôme Revaud']
2024
European Conference on Computer Vision
164
114
['Computer Science']
2406.09760
Bootstrapping Language Models with DPO Implicit Rewards
['Changyu Chen', 'Zichen Liu', 'Chao Du', 'Tianyu Pang', 'Qian Liu', 'Arunesh Sinha', 'Pradeep Varakantham', 'Min Lin']
['cs.CL', 'cs.LG']
Human alignment in large language models (LLMs) is an active area of research. A recent groundbreaking work, direct preference optimization (DPO), has greatly simplified the process from past work in reinforcement learning from human feedback (RLHF) by bypassing the reward learning stage in RLHF. DPO, after training, p...
2024-06-14T06:57:18Z
Accepted in ICLR 2025
null
null
Bootstrapping Language Models with DPO Implicit Rewards
['Changyu Chen', 'Zi-Yan Liu', 'Chao Du', 'Tianyu Pang', 'Qian Liu', 'Arunesh Sinha', 'Pradeep Varakantham', 'Min Lin']
2024
International Conference on Learning Representations
27
43
['Computer Science']
2406.09788
OpenCapBench: A Benchmark to Bridge Pose Estimation and Biomechanics
['Yoni Gozlan', 'Antoine Falisse', 'Scott Uhlrich', 'Anthony Gatti', 'Michael Black', 'Akshay Chaudhari']
['cs.CV']
Pose estimation has promised to impact healthcare by enabling more practical methods to quantify nuances of human movement and biomechanics. However, despite the inherent connection between pose estimation and biomechanics, these disciplines have largely remained disparate. For example, most current pose estimation ben...
2024-06-14T07:37:28Z
null
null
null
null
null
null
null
null
null
null
2406.09900
GEB-1.3B: Open Lightweight Large Language Model
['Jie Wu', 'Yufeng Zhu', 'Lei Shen', 'Xuqing Lu']
['cs.CL']
Recently developed large language models (LLMs) such as ChatGPT, Claude, and Llama have demonstrated impressive abilities, and even surpass human-level performance in several tasks. Despite their success, the resource-intensive demands of these models, requiring significant computational power for both training and inf...
2024-06-14T10:15:49Z
GEB-1.3B technical report
null
null
null
null
null
null
null
null
null
2406.09904
QQQ: Quality Quattuor-Bit Quantization for Large Language Models
['Ying Zhang', 'Peng Zhang', 'Mincong Huang', 'Jingyang Xiang', 'Yujie Wang', 'Chao Wang', 'Yineng Zhang', 'Lei Yu', 'Chuan Liu', 'Wei Lin']
['cs.LG']
Quantization is a proven effective method for compressing large language models. Although popular techniques like W8A8 and W4A16 effectively maintain model performance, they often fail to concurrently speed up the prefill and decoding stages of inference. W4A8 is a promising strategy to accelerate both of them while us...
2024-06-14T10:23:45Z
null
null
null
QQQ: Quality Quattuor-Bit Quantization for Large Language Models
['Ying Zhang', 'Peng Zhang', 'Mincong Huang', 'Jingyang Xiang', 'Yujie Wang', 'Chao Wang', 'Yineng Zhang', 'Lei Yu', 'Chuan Liu', 'Wei Lin']
2024
arXiv.org
6
26
['Computer Science']
2406.09913
OpenECAD: An Efficient Visual Language Model for Editable 3D-CAD Design
['Zhe Yuan', 'Jianqi Shi', 'Yanhong Huang']
['cs.CV']
Computer-aided design (CAD) tools are utilized in the manufacturing industry for modeling everything from cups to spacecraft. These programs are complex to use and typically require years of training and experience to master. Structured and well-constrained 2D sketches and 3D constructions are crucial components of CAD...
2024-06-14T10:47:52Z
null
Computers & Graphics 124C (2024) 104048
10.1016/j.cag.2024.104048
null
null
null
null
null
null
null
2406.09952
BiVLC: Extending Vision-Language Compositionality Evaluation with Text-to-Image Retrieval
['Imanol Miranda', 'Ander Salaberria', 'Eneko Agirre', 'Gorka Azkune']
['cs.CV', 'cs.CL', 'cs.LG']
Existing Vision-Language Compositionality (VLC) benchmarks like SugarCrepe are formulated as image-to-text retrieval problems, where, given an image, the models need to select between the correct textual description and a synthetic hard negative text. In this work, we present the Bidirectional Vision-Language Compositi...
2024-06-14T11:58:49Z
Accepted to NeurIPS 24 Datasets and Benchmarks Track; Project page at: https://imirandam.github.io/BiVLC_project_page/
null
null
BiVLC: Extending Vision-Language Compositionality Evaluation with Text-to-Image Retrieval
['Imanol Miranda', 'Ander Salaberria', 'Eneko Agirre', 'Gorka Azkune']
2024
Neural Information Processing Systems
2
31
['Computer Science']
2406.10099
Know the Unknown: An Uncertainty-Sensitive Method for LLM Instruction Tuning
['Jiaqi Li', 'Yixuan Tang', 'Yi Yang']
['cs.CL']
Large language models (LLMs) demonstrate remarkable capabilities but face challenges from hallucinations, which typically arise from insufficient knowledge or context. While instructing LLMs to acknowledge knowledge limitations by responding with "I don't know" appears promising, we find that models consistently strugg...
2024-06-14T14:56:04Z
null
null
null
Know the Unknown: An Uncertainty-Sensitive Method for LLM Instruction Tuning
['Jiaqi Li', 'Yixuan Tang', 'Yi Yang']
2,024
arXiv.org
8
54
['Computer Science']
2,406.10118
SEACrowd: A Multilingual Multimodal Data Hub and Benchmark Suite for Southeast Asian Languages
['Holy Lovenia', 'Rahmad Mahendra', 'Salsabil Maulana Akbar', 'Lester James V. Miranda', 'Jennifer Santoso', 'Elyanah Aco', 'Akhdan Fadhilah', 'Jonibek Mansurov', 'Joseph Marvin Imperial', 'Onno P. Kampman', 'Joel Ruben Antony Moniz', 'Muhammad Ravi Shulthan Habibi', 'Frederikus Hudi', 'Railey Montalan', 'Ryan Ignatius...
['cs.CL']
Southeast Asia (SEA) is a region rich in linguistic diversity and cultural variety, with over 1,300 indigenous languages and a population of 671 million people. However, prevailing AI models suffer from a significant lack of representation of texts, images, and audio datasets from SEA, compromising the quality of AI mo...
2024-06-14T15:23:39Z
https://seacrowd.github.io/ Published in EMNLP 2024
null
null
SEACrowd: A Multilingual Multimodal Data Hub and Benchmark Suite for Southeast Asian Languages
['Holy Lovenia', 'Rahmad Mahendra', 'Salsabil Maulana Akbar', 'Lester James Validad Miranda', 'Jennifer Santoso', 'Elyanah Aco', 'Akhdan Fadhilah', 'Jonibek Mansurov', 'Joseph Marvin Imperial', 'Onno P. Kampman', 'Joel Ruben Antony Moniz', 'Muhammad Ravi Shulthan Habibi', 'Frederikus Hudi', 'Railey Montalan', 'Ryan Ign...
2,024
Conference on Empirical Methods in Natural Language Processing
14
143
['Computer Science']
2,406.10163
MeshAnything: Artist-Created Mesh Generation with Autoregressive Transformers
['Yiwen Chen', 'Tong He', 'Di Huang', 'Weicai Ye', 'Sijin Chen', 'Jiaxiang Tang', 'Xin Chen', 'Zhongang Cai', 'Lei Yang', 'Gang Yu', 'Guosheng Lin', 'Chi Zhang']
['cs.CV', 'cs.AI']
Recently, 3D assets created via reconstruction and generation have matched the quality of manually crafted assets, highlighting their potential for replacement. However, this potential is largely unrealized because these assets always need to be converted to meshes for 3D industry applications, and the meshes produced ...
2024-06-14T16:30:25Z
Project Page: https://buaacyw.github.io/mesh-anything/ Code: https://github.com/buaacyw/MeshAnything
null
null
null
null
null
null
null
null
null
2,406.10173
IntentionQA: A Benchmark for Evaluating Purchase Intention Comprehension Abilities of Language Models in E-commerce
['Wenxuan Ding', 'Weiqi Wang', 'Sze Heng Douglas Kwok', 'Minghao Liu', 'Tianqing Fang', 'Jiaxin Bai', 'Xin Liu', 'Changlong Yu', 'Zheng Li', 'Chen Luo', 'Qingyu Yin', 'Bing Yin', 'Junxian He', 'Yangqiu Song']
['cs.CL']
Enhancing Language Models' (LMs) ability to understand purchase intentions in E-commerce scenarios is crucial for their effective assistance in various downstream tasks. However, previous approaches that distill intentions from LMs often fail to generate meaningful and human-centric intentions applicable in real-world ...
2024-06-14T16:51:21Z
Findings of EMNLP 2024
null
null
IntentionQA: A Benchmark for Evaluating Purchase Intention Comprehension Abilities of Language Models in E-commerce
['Wenxuan Ding', 'Weiqi Wang', 'Sze Heng Douglas Kwok', 'Minghao Liu', 'Tianqing Fang', 'Jiaxin Bai', 'Junxian He', 'Yangqiu Song']
2,024
Conference on Empirical Methods in Natural Language Processing
8
68
['Computer Science']
2,406.10208
Glyph-ByT5-v2: A Strong Aesthetic Baseline for Accurate Multilingual Visual Text Rendering
['Zeyu Liu', 'Weicong Liang', 'Yiming Zhao', 'Bohan Chen', 'Lin Liang', 'Lijuan Wang', 'Ji Li', 'Yuhui Yuan']
['cs.CV']
Recently, Glyph-ByT5 has achieved highly accurate visual text rendering performance in graphic design images. However, it still focuses solely on English and performs relatively poorly in terms of visual appeal. In this work, we address these two fundamental limitations by presenting Glyph-ByT5-v2 and Glyph-SDXL-v2, wh...
2024-06-14T17:44:09Z
Project page: https://glyph-byt5-v2.github.io/
null
null
null
null
null
null
null
null
null
2,406.10209
Be like a Goldfish, Don't Memorize! Mitigating Memorization in Generative LLMs
['Abhimanyu Hans', 'Yuxin Wen', 'Neel Jain', 'John Kirchenbauer', 'Hamid Kazemi', 'Prajwal Singhania', 'Siddharth Singh', 'Gowthami Somepalli', 'Jonas Geiping', 'Abhinav Bhatele', 'Tom Goldstein']
['cs.CL']
Large language models can memorize and repeat their training data, causing privacy and copyright risks. To mitigate memorization, we introduce a subtle modification to the next-token training objective that we call the goldfish loss. During training, randomly sampled subsets of tokens are excluded from the loss computa...
2024-06-14T17:44:22Z
10 pages, 8 figures, and 1 table in the main body. Code available at https://github.com/ahans30/goldfish-loss and checkpoints at https://huggingface.co/collections/tomg-group-umd/goldfish-loss-mitigating-memorization-in-llms-66c175becb6aab07744f7272
null
null
null
null
null
null
null
null
null
2,406.10216
Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs
['Rui Yang', 'Ruomeng Ding', 'Yong Lin', 'Huan Zhang', 'Tong Zhang']
['cs.CL', 'cs.AI']
Reward models trained on human preference data have been proven to effectively align Large Language Models (LLMs) with human intent within the framework of reinforcement learning from human feedback (RLHF). However, current reward models have limited generalization capabilities to unseen prompts and responses, which ca...
2024-06-14T17:49:59Z
NeurIPS 2024
null
null
null
null
null
null
null
null
null
2,406.10224
EFM3D: A Benchmark for Measuring Progress Towards 3D Egocentric Foundation Models
['Julian Straub', 'Daniel DeTone', 'Tianwei Shen', 'Nan Yang', 'Chris Sweeney', 'Richard Newcombe']
['cs.CV']
The advent of wearable computers enables a new source of context for AI that is embedded in egocentric sensor data. This new egocentric data comes equipped with fine-grained 3D location information and thus presents the opportunity for a novel class of spatial foundation models that are rooted in 3D space. To measure p...
2024-06-14T17:57:35Z
null
null
null
EFM3D: A Benchmark for Measuring Progress Towards 3D Egocentric Foundation Models
['Julian Straub', 'Daniel DeTone', 'Tianwei Shen', 'Nan Yang', 'Chris Sweeney', 'Richard A. Newcombe']
2,024
arXiv.org
9
57
['Computer Science']
2,406.10258
Curating Grounded Synthetic Data with Global Perspectives for Equitable AI
['Elin Törnquist', 'Robert Alexander Caulk']
['cs.CL', 'I.2.7']
The development of robust AI models relies heavily on the quality and variety of training data available. In fields where data scarcity is prevalent, synthetic data generation offers a vital solution. In this paper, we introduce a novel approach to creating synthetic datasets, grounded in real-world diversity and enric...
2024-06-10T17:59:11Z
null
null
null
null
null
null
null
null
null
null
2,406.10324
L4GM: Large 4D Gaussian Reconstruction Model
['Jiawei Ren', 'Kevin Xie', 'Ashkan Mirzaei', 'Hanxue Liang', 'Xiaohui Zeng', 'Karsten Kreis', 'Ziwei Liu', 'Antonio Torralba', 'Sanja Fidler', 'Seung Wook Kim', 'Huan Ling']
['cs.CV', 'cs.LG']
We present L4GM, the first 4D Large Reconstruction Model that produces animated objects from a single-view video input -- in a single feed-forward pass that takes only a second. Key to our success is a novel dataset of multiview videos containing curated, rendered animated objects from Objaverse. This dataset depicts 4...
2024-06-14T17:51:18Z
Project page: https://research.nvidia.com/labs/toronto-ai/l4gm
null
null
L4GM: Large 4D Gaussian Reconstruction Model
['Jiawei Ren', 'Kevin Xie', 'Ashkan Mirzaei', 'Hanxue Liang', 'Xiaohui Zeng', 'Karsten Kreis', 'Ziwei Liu', 'Antonio Torralba', 'Sanja Fidler', 'Seung Wook Kim', 'Huan Ling']
2,024
Neural Information Processing Systems
45
72
['Computer Science']
2,406.10328
From Pixels to Prose: A Large Dataset of Dense Image Captions
['Vasu Singla', 'Kaiyu Yue', 'Sukriti Paul', 'Reza Shirkavand', 'Mayuka Jayawardhana', 'Alireza Ganjdanesh', 'Heng Huang', 'Abhinav Bhatele', 'Gowthami Somepalli', 'Tom Goldstein']
['cs.CV', 'cs.CL', 'cs.LG']
Training large vision-language models requires extensive, high-quality image-text pairs. Existing web-scraped datasets, however, are noisy and lack detailed image descriptions. To bridge this gap, we introduce PixelProse, a comprehensive dataset of over 16M (million) synthetically generated captions, leveraging cutting...
2024-06-14T17:59:53Z
pixelprose 16M dataset
null
null
null
null
null
null
null
null
null
2,406.10429
Consistency-diversity-realism Pareto fronts of conditional image generative models
['Pietro Astolfi', 'Marlene Careil', 'Melissa Hall', 'Oscar Mañas', 'Matthew Muckley', 'Jakob Verbeek', 'Adriana Romero Soriano', 'Michal Drozdzal']
['cs.CV', 'cs.AI']
Building world models that accurately and comprehensively represent the real world is the utmost aspiration for conditional image generative models as it would enable their use as world simulators. For these models to be successful world models, they should not only excel at image quality and prompt-image consistency b...
2024-06-14T22:14:11Z
null
null
null
null
null
null
null
null
null
null
2,406.10454
HumanPlus: Humanoid Shadowing and Imitation from Humans
['Zipeng Fu', 'Qingqing Zhao', 'Qi Wu', 'Gordon Wetzstein', 'Chelsea Finn']
['cs.RO', 'cs.AI', 'cs.CV', 'cs.LG', 'cs.SY', 'eess.SY']
One of the key arguments for building robots that have similar form factors to human beings is that we can leverage the massive human data for training. Yet, doing so has remained challenging in practice due to the complexities in humanoid perception and control, lingering physical gaps between humanoids and humans in ...
2024-06-15T00:41:34Z
project website: https://humanoid-ai.github.io/
null
null
null
null
null
null
null
null
null
2,406.10601
The Devil is in the Details: StyleFeatureEditor for Detail-Rich StyleGAN Inversion and High Quality Image Editing
['Denis Bobkov', 'Vadim Titov', 'Aibek Alanov', 'Dmitry Vetrov']
['cs.CV']
The task of manipulating real image attributes through StyleGAN inversion has been extensively researched. This process involves searching latent variables from a well-trained StyleGAN generator that can synthesize a real image, modifying these latent variables, and then synthesizing an image with the desired edits. A ...
2024-06-15T11:28:32Z
Accepted to CVPR 2024
null
null
null
null
null
null
null
null
null
2,406.10638
Unveiling the Ignorance of MLLMs: Seeing Clearly, Answering Incorrectly
['Yexin Liu', 'Zhengyang Liang', 'Yueze Wang', 'Xianfeng Wu', 'Feilong Tang', 'Muyang He', 'Jian Li', 'Zheng Liu', 'Harry Yang', 'Sernam Lim', 'Bo Zhao']
['cs.CV']
Multimodal Large Language Models (MLLMs) have displayed remarkable performance in multi-modal tasks, particularly in visual comprehension. However, we reveal that MLLMs often generate incorrect answers even when they understand the visual content. To this end, we manually construct a benchmark with 12 categories and de...
2024-06-15T13:58:26Z
null
null
null
Unveiling the Ignorance of MLLMs: Seeing Clearly, Answering Incorrectly
['Yexin Liu', 'Zhengyang Liang', 'Yueze Wang', 'Xianfeng Wu', 'Feilong Tang', 'Muyang He', 'Jian Li', 'Zheng Liu', 'Harry Yang', 'Ser-Nam Lim', 'Bo Zhao']
2,024
Computer Vision and Pattern Recognition
7
98
['Computer Science']
2,406.10721
RoboPoint: A Vision-Language Model for Spatial Affordance Prediction for Robotics
['Wentao Yuan', 'Jiafei Duan', 'Valts Blukis', 'Wilbert Pumacay', 'Ranjay Krishna', 'Adithyavairavan Murali', 'Arsalan Mousavian', 'Dieter Fox']
['cs.RO', 'cs.AI', 'cs.CV']
From rearranging objects on a table to putting groceries into shelves, robots must plan precise action points to perform tasks accurately and reliably. In spite of the recent adoption of vision language models (VLMs) to control robot behavior, VLMs struggle to precisely articulate robot actions using language. We intro...
2024-06-15T19:22:51Z
null
null
null
null
null
null
null
null
null
null
2,406.10735
How Should We Extract Discrete Audio Tokens from Self-Supervised Models?
['Pooneh Mousavi', 'Jarod Duret', 'Salah Zaiem', 'Luca Della Libera', 'Artem Ploujnikov', 'Cem Subakan', 'Mirco Ravanelli']
['cs.SD', 'cs.AI', 'cs.CL', 'eess.AS']
Discrete audio tokens have recently gained attention for their potential to bridge the gap between audio and language processing. Ideal audio tokens must preserve content, paralinguistic elements, speaker identity, and many other audio details. Current audio tokenization methods fall into two categories: Semantic token...
2024-06-15T20:43:07Z
4 pages, 2 figures, 2 tables, Accepted at Interspeech 2024
null
null
How Should We Extract Discrete Audio Tokens from Self-Supervised Models?
['Pooneh Mousavi', 'J. Duret', 'Salah Zaiem', 'Luca Della Libera', 'Artem Ploujnikov', 'Cem Subakan', 'M. Ravanelli']
2,024
Interspeech
15
45
['Computer Science', 'Engineering']
2,406.10806
ptt5-v2: A Closer Look at Continued Pretraining of T5 Models for the Portuguese Language
['Marcos Piau', 'Roberto Lotufo', 'Rodrigo Nogueira']
['cs.CL', 'cs.AI', 'cs.IR']
Despite advancements in Natural Language Processing (NLP) and the growing availability of pretrained models, the English language remains the primary focus of model development. Continued pretraining on language-specific corpora provides a practical solution for adapting models to other languages. However, the impact o...
2024-06-16T05:17:56Z
null
null
null
null
null
null
null
null
null
null
2,406.10819
GUI-World: A Video Benchmark and Dataset for Multimodal GUI-oriented Understanding
['Dongping Chen', 'Yue Huang', 'Siyuan Wu', 'Jingyu Tang', 'Liuyi Chen', 'Yilin Bai', 'Zhigang He', 'Chenlong Wang', 'Huichi Zhou', 'Yiqiang Li', 'Tianshuo Zhou', 'Yue Yu', 'Chujie Gao', 'Qihui Zhang', 'Yi Gui', 'Zhen Li', 'Yao Wan', 'Pan Zhou', 'Jianfeng Gao', 'Lichao Sun']
['cs.CV', 'cs.AI', 'cs.CL']
Recently, Multimodal Large Language Models (MLLMs) have been used as agents to control keyboard and mouse inputs by directly perceiving the Graphical User Interface (GUI) and generating corresponding commands. However, current agents primarily demonstrate strong understanding capabilities in static environments and are...
2024-06-16T06:56:53Z
Accepted by ICLR 2025
null
null
null
null
null
null
null
null
null
2,406.10858
Step-level Value Preference Optimization for Mathematical Reasoning
['Guoxin Chen', 'Minpeng Liao', 'Chengxi Li', 'Kai Fan']
['cs.CL', 'cs.AI']
Direct Preference Optimization (DPO) using an implicit reward model has proven to be an effective alternative to reinforcement learning from human feedback (RLHF) for fine-tuning preference aligned large language models (LLMs). However, the overall preference annotations of responses do not fully capture the fine-grain...
2024-06-16T09:06:17Z
Camera ready version for EMNLP2024-Findings
null
null
Step-level Value Preference Optimization for Mathematical Reasoning
['Guoxin Chen', 'Minpeng Liao', 'Chengxi Li', 'Kai Fan']
2,024
Conference on Empirical Methods in Natural Language Processing
42
42
['Computer Science']
2,406.10970
Joint Audio and Symbolic Conditioning for Temporally Controlled Text-to-Music Generation
['Or Tal', 'Alon Ziv', 'Itai Gat', 'Felix Kreuk', 'Yossi Adi']
['cs.SD', 'eess.AS']
We present JASCO, a temporally controlled text-to-music generation model utilizing both symbolic and audio-based conditions. JASCO can generate high-quality music samples conditioned on global text descriptions along with fine-grained local controls. JASCO is based on the Flow Matching modeling paradigm together with a...
2024-06-16T15:06:06Z
null
null
null
null
null
null
null
null
null
null
2,406.11037
NAST: Noise Aware Speech Tokenization for Speech Language Models
['Shoval Messica', 'Yossi Adi']
['cs.SD', 'eess.AS']
Speech tokenization is the task of representing speech signals as a sequence of discrete units. Such representations can be later used for various downstream tasks including automatic speech recognition, text-to-speech, etc. More relevant to this study, such representation serves as the basis of Speech Language Models....
2024-06-16T18:20:45Z
Accepted at Interspeech 2024
null
null
null
null
null
null
null
null
null
2,406.11192
Beyond Boundaries: Learning a Universal Entity Taxonomy across Datasets and Languages for Open Named Entity Recognition
['Yuming Yang', 'Wantong Zhao', 'Caishuang Huang', 'Junjie Ye', 'Xiao Wang', 'Huiyuan Zheng', 'Yang Nan', 'Yuran Wang', 'Xueying Xu', 'Kaixin Huang', 'Yunke Zhang', 'Tao Gui', 'Qi Zhang', 'Xuanjing Huang']
['cs.CL']
Open Named Entity Recognition (NER), which involves identifying arbitrary types of entities from arbitrary domains, remains challenging for Large Language Models (LLMs). Recent studies suggest that fine-tuning LLMs on extensive NER data can boost their performance. However, training directly on existing datasets neglec...
2024-06-17T03:57:35Z
Accepted at COLING 2025. Camera-ready version updated. Project page: https://github.com/UmeanNever/B2NER
Proceedings of the 31st International Conference on Computational Linguistics (2025) 10902-10923
null
null
null
null
null
null
null
null
2,406.11251
Unifying Multimodal Retrieval via Document Screenshot Embedding
['Xueguang Ma', 'Sheng-Chieh Lin', 'Minghan Li', 'Wenhu Chen', 'Jimmy Lin']
['cs.IR']
In the real world, documents are organized in different formats and varied modalities. Traditional retrieval pipelines require tailored document parsing techniques and content extraction modules to prepare input for indexing. This process is tedious, prone to errors, and has information loss. To this end, we propose Do...
2024-06-17T06:27:35Z
EMNLP2024 main
null
null
null
null
null
null
null
null
null
2,406.11317
GUICourse: From General Vision Language Models to Versatile GUI Agents
['Wentong Chen', 'Junbo Cui', 'Jinyi Hu', 'Yujia Qin', 'Junjie Fang', 'Yue Zhao', 'Chongyi Wang', 'Jun Liu', 'Guirong Chen', 'Yupeng Huo', 'Yuan Yao', 'Yankai Lin', 'Zhiyuan Liu', 'Maosong Sun']
['cs.AI', 'cs.CL', 'cs.CV', 'cs.HC']
Utilizing Graphic User Interface (GUI) for human-computer interaction is essential for accessing a wide range of digital tools. Recent advancements in Vision Language Models (VLMs) highlight the compelling potential to develop versatile agents to help humans finish GUI navigation tasks. However, current VLMs are challe...
2024-06-17T08:30:55Z
null
null
null
null
null
null
null
null
null
null