| arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
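Each row below is one record keyed by the column names above. As a minimal sketch (the `parse_row` helper and its behavior are illustrative assumptions, not part of the dataset), a row can be read back into a record like this. Note that `arxiv_id` was typed `float64` in the export, which renders identifiers with thousands separators (e.g. `2,406.11385`) and can drop trailing zeros, so the sketch strips the separators and otherwise treats the column as a string:

```python
import ast

# Column order as given in the table header.
COLUMNS = [
    "arxiv_id", "title", "authors", "categories", "summary",
    "published", "comments", "journal_ref", "doi",
    "ss_title", "ss_authors", "ss_year", "ss_venue",
    "ss_citationCount", "ss_referenceCount", "ss_fieldsOfStudy",
]

def parse_row(line: str) -> dict:
    """Split one markdown table row into a dict keyed by COLUMNS.

    'null' cells become None. Cells holding complete Python-style list
    literals (authors, categories, fields of study) are parsed with
    ast.literal_eval; truncated lists (ending in '...') are left as text.
    """
    cells = [c.strip() for c in line.strip().strip("|").split("|")]
    record = dict(zip(COLUMNS, cells))
    for key, value in record.items():
        if value == "null":
            record[key] = None
        elif value.startswith("[") and value.endswith("]"):
            record[key] = ast.literal_eval(value)
    # Undo the float64 rendering of the identifier column:
    # "2,406.11385" reads back as the id string "2406.11385".
    if record.get("arxiv_id"):
        record["arxiv_id"] = record["arxiv_id"].replace(",", "")
    return record
```

Splitting on `|` is safe here only because no cell in this export contains a pipe character; a real parser for arbitrary markdown tables would need escaping rules.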
| 2406.11357 | Refiner: Restructure Retrieval Content Efficiently to Advance Question-Answering Capabilities | ['Zhonghao Li', 'Xuming Hu', 'Aiwei Liu', 'Kening Zheng', 'Sirui Huang', 'Hui Xiong'] | ['cs.CL', 'cs.AI', 'cs.HC', 'cs.IR', 'cs.MA'] | Large Language Models (LLMs) are limited by their parametric knowledge, leading to hallucinations in knowledge-extensive tasks. To address this, Retrieval-Augmented Generation (RAG) incorporates external document chunks to expand LLM knowledge. Furthermore, compressing information from document chunks through extractio... | 2024-06-17T09:25:10Z | 8 pages | null | 10.18653/v1/2024.findings-emnlp.500 | null | null | null | null | null | null | null |
| 2406.11385 | MetaGPT: Merging Large Language Models Using Model Exclusive Task Arithmetic | ['Yuyan Zhou', 'Liang Song', 'Bingning Wang', 'Weipeng Chen'] | ['cs.CL'] | The advent of large language models (LLMs) like GPT-4 has catalyzed the exploration of multi-task learning (MTL), in which a single model demonstrates proficiency across diverse tasks. Task arithmetic has emerged as a cost-effective approach for MTL. It enables performance enhancement across multiple tasks by adding th... | 2024-06-17T10:12:45Z | 19 pages | null | null | null | null | null | null | null | null | null |
| 2406.1141 | HARE: HumAn pRiors, a key to small language model Efficiency | ['Lingyun Zhang', 'Bin jin', 'Gaojian Ge', 'Lunhui Liu', 'Xuewen Shen', 'Mingyong Wu', 'Houqian Zhang', 'Yongneng Jiang', 'Shiqi Chen', 'Shi Pu'] | ['cs.CL', 'cs.AI'] | Human priors play a crucial role in efficiently utilizing data in deep learning. However, with the development of large language models (LLMs), there is an increasing emphasis on scaling both model size and data volume, which often diminishes the importance of human priors in data construction. Influenced by these tren... | 2024-06-17T10:56:03Z | null | null | null | HARE: HumAn pRiors, a key to small language model Efficiency | ['Lingyun Zhang', 'Bin jin', 'Gaojian Ge', 'Lunhui Liu', 'Xuewen Shen', 'Mingyong Wu', 'Houqian Zhang', 'Yongneng Jiang', 'Shiqi Chen', 'Shi Pu'] | 2024 | arXiv.org | 0 | 28 | ['Computer Science'] |
| 2406.11477 | How Can We Effectively Expand the Vocabulary of LLMs with 0.01GB of Target Language Text? | ['Atsuki Yamaguchi', 'Aline Villavicencio', 'Nikolaos Aletras'] | ['cs.CL', 'cs.AI'] | Large language models (LLMs) have shown remarkable capabilities in many languages beyond English. Yet, LLMs require more inference steps when generating non-English text due to their reliance on English-centric tokenizers and vocabulary, resulting in higher usage costs to non-English speakers. Vocabulary expansion with... | 2024-06-17T12:42:34Z | null | null | null | How Can We Effectively Expand the Vocabulary of LLMs with 0.01GB of Target Language Text? | ['Atsuki Yamaguchi', 'Aline Villavicencio', 'Nikolaos Aletras'] | 2024 | null | 10 | 54 | ['Computer Science'] |
| 2406.11579 | Duoduo CLIP: Efficient 3D Understanding with Multi-View Images | ['Han-Hung Lee', 'Yiming Zhang', 'Angel X. Chang'] | ['cs.CV'] | We introduce Duoduo CLIP, a model for 3D representation learning that learns shape encodings from multi-view images instead of point clouds. The choice of multi-view images allows us to leverage 2D priors from off-the-shelf CLIP models to facilitate fine-tuning with 3D data. Our approach not only shows better generaliz... | 2024-06-17T14:16:12Z | ICLR 2025 | null | null | null | null | null | null | null | null | null |
| 2406.11612 | Long Code Arena: a Set of Benchmarks for Long-Context Code Models | ['Egor Bogomolov', 'Aleksandra Eliseeva', 'Timur Galimzyanov', 'Evgeniy Glukhov', 'Anton Shapkin', 'Maria Tigina', 'Yaroslav Golubev', 'Alexander Kovrigin', 'Arie van Deursen', 'Maliheh Izadi', 'Timofey Bryksin'] | ['cs.LG', 'cs.AI', 'cs.IR', 'cs.SE'] | Nowadays, the fields of code and natural language processing are evolving rapidly. In particular, models become better at processing long context windows - supported context sizes have increased by orders of magnitude over the last few years. However, there is a shortage of benchmarks for code processing that go beyond... | 2024-06-17T14:58:29Z | 54 pages, 4 figures, 22 tables | null | null | Long Code Arena: a Set of Benchmarks for Long-Context Code Models | ['Egor Bogomolov', 'Aleksandra Eliseeva', 'Timur Galimzyanov', 'Evgeniy Glukhov', 'Anton Shapkin', 'Maria Tigina', 'Yaroslav Golubev', 'Alexander Kovrigin', 'A. Deursen', 'M. Izadi', 'T. Bryksin'] | 2024 | arXiv.org | 23 | 0 | ['Computer Science'] |
| 2406.11617 | DELLA-Merging: Reducing Interference in Model Merging through Magnitude-Based Sampling | ['Pala Tej Deep', 'Rishabh Bhardwaj', 'Soujanya Poria'] | ['cs.CL'] | With the proliferation of domain-specific models, model merging has emerged as a set of techniques that combine the capabilities of multiple models into one that can multitask without the cost of additional training. In this paper, we propose a new model merging technique, Drop and rEscaLe via sampLing with mAgnitude (... | 2024-06-17T15:02:45Z | null | null | null | DELLA-Merging: Reducing Interference in Model Merging through Magnitude-Based Sampling | ['Pala Tej Deep', 'Rishabh Bhardwaj', 'Soujanya Poria'] | 2024 | arXiv.org | 31 | 34 | ['Computer Science'] |
| 2406.11633 | DocGenome: An Open Large-scale Scientific Document Benchmark for Training and Testing Multi-modal Large Language Models | ['Renqiu Xia', 'Song Mao', 'Xiangchao Yan', 'Hongbin Zhou', 'Bo Zhang', 'Haoyang Peng', 'Jiahao Pi', 'Daocheng Fu', 'Wenjie Wu', 'Hancheng Ye', 'Shiyang Feng', 'Bin Wang', 'Chao Xu', 'Conghui He', 'Pinlong Cai', 'Min Dou', 'Botian Shi', 'Sheng Zhou', 'Yongwei Wang', 'Bin Wang', 'Junchi Yan', 'Fei Wu', 'Yu Qiao'] | ['cs.CV'] | Scientific documents record research findings and valuable human knowledge, comprising a vast corpus of high-quality data. Leveraging multi-modality data extracted from these documents and assessing large models' abilities to handle scientific document-oriented tasks is therefore meaningful. Despite promising advanceme... | 2024-06-17T15:13:52Z | Homepage of DocGenome: https://unimodal4reasoning.github.io/DocGenome_page 22 pages, 11 figures | null | null | null | null | null | null | null | null | null |
| 2406.11657 | Can LLM be a Personalized Judge? | ['Yijiang River Dong', 'Tiancheng Hu', 'Nigel Collier'] | ['cs.CL', 'cs.CY'] | Ensuring that large language models (LLMs) reflect diverse user values and preferences is crucial as their user bases expand globally. It is therefore encouraging to see the growing interest in LLM personalization within the research community. However, current works often rely on the LLM-as-a-Judge approach for evalua... | 2024-06-17T15:41:30Z | Our code is available at https://github.com/dong-river/Personalized-Judge | null | null | null | null | null | null | null | null | null |
| 2406.11665 | See It from My Perspective: How Language Affects Cultural Bias in Image Understanding | ['Amith Ananthram', 'Elias Stengel-Eskin', 'Mohit Bansal', 'Kathleen McKeown'] | ['cs.CL', 'cs.AI', 'cs.CV'] | Vision-language models (VLMs) can respond to queries about images in many languages. However, beyond language, culture affects how we see things. For example, individuals from Western cultures focus more on the central figure in an image while individuals from East Asian cultures attend more to scene context. In this w... | 2024-06-17T15:49:51Z | Accepted at ICLR 2025. 22 pages, 6 figures. Code/models: https://github.com/amith-ananthram/see-it-from-my-perspective | null | null | null | null | null | null | null | null | null |
| 2406.11682 | Knowledge-to-Jailbreak: Investigating Knowledge-driven Jailbreaking Attacks for Large Language Models | ['Shangqing Tu', 'Zhuoran Pan', 'Wenxuan Wang', 'Zhexin Zhang', 'Yuliang Sun', 'Jifan Yu', 'Hongning Wang', 'Lei Hou', 'Juanzi Li'] | ['cs.CL', 'cs.AI', 'cs.CR'] | Large language models (LLMs) have been increasingly applied to various domains, which triggers increasing concerns about LLMs' safety on specialized domains, e.g. medicine. Despite prior explorations on general jailbreaking attacks, there are two challenges for applying existing attacks on testing the domain-specific s... | 2024-06-17T15:59:59Z | Accepted by KDD 2025 research track | null | null | null | null | null | null | null | null | null |
| 2406.11704 | Nemotron-4 340B Technical Report | ['Nvidia', ':', 'Bo Adler', 'Niket Agarwal', 'Ashwath Aithal', 'Dong H. Anh', 'Pallab Bhattacharya', 'Annika Brundyn', 'Jared Casper', 'Bryan Catanzaro', 'Sharon Clay', 'Jonathan Cohen', 'Sirshak Das', 'Ayush Dattagupta', 'Olivier Delalleau', 'Leon Derczynski', 'Yi Dong', 'Daniel Egert', 'Ellie Evans', 'Aleksander Fice... | ['cs.CL', 'cs.AI', 'cs.LG'] | We release the Nemotron-4 340B model family, including Nemotron-4-340B-Base, Nemotron-4-340B-Instruct, and Nemotron-4-340B-Reward. Our models are open access under the NVIDIA Open Model License Agreement, a permissive model license that allows distribution, modification, and use of the models and its outputs. These mod... | 2024-06-17T16:25:04Z | null | null | null | null | null | null | null | null | null | null |
| 2406.11717 | Refusal in Language Models Is Mediated by a Single Direction | ['Andy Arditi', 'Oscar Obeso', 'Aaquib Syed', 'Daniel Paleka', 'Nina Panickssery', 'Wes Gurnee', 'Neel Nanda'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Conversational large language models are fine-tuned for both instruction-following and safety, resulting in models that obey benign requests but refuse harmful ones. While this refusal behavior is widespread across chat models, its underlying mechanisms remain poorly understood. In this work, we show that refusal is me... | 2024-06-17T16:36:12Z | null | null | null | null | null | null | null | null | null | null |
| 2406.11727 | 1000 African Voices: Advancing inclusive multi-speaker multi-accent speech synthesis | ['Sewade Ogun', 'Abraham T. Owodunni', 'Tobi Olatunji', 'Eniola Alese', 'Babatunde Oladimeji', 'Tejumade Afonja', 'Kayode Olaleye', 'Naome A. Etori', 'Tosin Adewumi'] | ['eess.AS', 'cs.CL'] | Recent advances in speech synthesis have enabled many useful applications like audio directions in Google Maps, screen readers, and automated content generation on platforms like TikTok. However, these systems are mostly dominated by voices sourced from data-rich geographies with personas representative of their source... | 2024-06-17T16:46:10Z | Accepted at Interspeech 2024 | null | null | 1000 African Voices: Advancing inclusive multi-speaker multi-accent speech synthesis | ['Sewade Ogun', 'Abraham Owodunni', 'Tobi Olatunji', 'Eniola Alese', 'Babatunde Oladimeji', 'Tejumade Afonja', 'Kayode Olaleye', 'Naome A. Etori', 'Tosin P. Adewumi'] | 2024 | Interspeech | 6 | 32 | ['Computer Science', 'Engineering'] |
| 2406.11736 | Interactive Evolution: A Neural-Symbolic Self-Training Framework For Large Language Models | ['Fangzhi Xu', 'Qiushi Sun', 'Kanzhi Cheng', 'Jun Liu', 'Yu Qiao', 'Zhiyong Wu'] | ['cs.CL', 'cs.AI'] | One of the primary driving forces contributing to the superior performance of Large Language Models (LLMs) is the extensive availability of human-annotated natural language data, which is used for alignment fine-tuning. This inspired researchers to investigate self-training methods to mitigate the extensive reliance on... | 2024-06-17T16:52:56Z | 18 pages, 6 figures | null | null | Interactive Evolution: A Neural-Symbolic Self-Training Framework For Large Language Models | ['Fangzhi Xu', 'Qiushi Sun', 'Kanzhi Cheng', 'Jun Liu', 'Yu Qiao', 'Zhiyong Wu'] | 2024 | arXiv.org | 7 | 52 | ['Computer Science'] |
| 2406.11794 | DataComp-LM: In search of the next generation of training sets for language models | ['Jeffrey Li', 'Alex Fang', 'Georgios Smyrnis', 'Maor Ivgi', 'Matt Jordan', 'Samir Gadre', 'Hritik Bansal', 'Etash Guha', 'Sedrick Keh', 'Kushal Arora', 'Saurabh Garg', 'Rui Xin', 'Niklas Muennighoff', 'Reinhard Heckel', 'Jean Mercat', 'Mayee Chen', 'Suchin Gururangan', 'Mitchell Wortsman', 'Alon Albalak', 'Yonatan Bit... | ['cs.LG', 'cs.CL'] | We introduce DataComp for Language Models (DCLM), a testbed for controlled dataset experiments with the goal of improving language models. As part of DCLM, we provide a standardized corpus of 240T tokens extracted from Common Crawl, effective pretraining recipes based on the OpenLM framework, and a broad suite of 53 do... | 2024-06-17T17:42:57Z | Project page: https://www.datacomp.ai/dclm/ | null | null | null | null | null | null | null | null | null |
| 2406.11816 | VideoLLM-online: Online Video Large Language Model for Streaming Video | ['Joya Chen', 'Zhaoyang Lv', 'Shiwei Wu', 'Kevin Qinghong Lin', 'Chenan Song', 'Difei Gao', 'Jia-Wei Liu', 'Ziteng Gao', 'Dongxing Mao', 'Mike Zheng Shou'] | ['cs.CV'] | Recent Large Language Models have been enhanced with vision capabilities, enabling them to comprehend images, videos, and interleaved vision-language content. However, the learning methods of these large multimodal models typically treat videos as predetermined clips, making them less effective and efficient at handlin... | 2024-06-17T17:55:32Z | CVPR 2024. This arxiv version is upgraded with Llama-3 | null | null | VideoLLM-online: Online Video Large Language Model for Streaming Video | ['Joya Chen', 'Zhaoyang Lv', 'Shiwei Wu', 'Kevin Qinghong Lin', 'Chenan Song', 'Difei Gao', 'Jia-Wei Liu', 'Ziteng Gao', 'Dongxing Mao', 'Mike Zheng Shou'] | 2024 | Computer Vision and Pattern Recognition | 59 | 99 | ['Computer Science'] |
| 2406.11817 | Iterative Length-Regularized Direct Preference Optimization: A Case Study on Improving 7B Language Models to GPT-4 Level | ['Jie Liu', 'Zhanhui Zhou', 'Jiaheng Liu', 'Xingyuan Bu', 'Chao Yang', 'Han-Sen Zhong', 'Wanli Ouyang'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Direct Preference Optimization (DPO), a standard method for aligning language models with human preferences, is traditionally applied to offline preferences. Recent studies show that DPO benefits from iterative training with online preferences labeled by a trained reward model. In this work, we identify a pitfall of va... | 2024-06-17T17:55:38Z | null | null | null | null | null | null | null | null | null | null |
| 2406.11823 | On Efficient Language and Vision Assistants for Visually-Situated Natural Language Understanding: What Matters in Reading and Reasoning | ['Geewook Kim', 'Minjoon Seo'] | ['cs.CV', 'cs.CL'] | Recent advancements in language and vision assistants have showcased impressive capabilities but suffer from a lack of transparency, limiting broader research and reproducibility. While open-source models handle general image tasks effectively, they face challenges with the high computational demands of complex visuall... | 2024-06-17T17:57:30Z | EMNLP 2024 Main | null | null | On Efficient Language and Vision Assistants for Visually-Situated Natural Language Understanding: What Matters in Reading and Reasoning | ['Geewook Kim', 'Minjoon Seo'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 3 | 63 | ['Computer Science'] |
| 2406.11827 | WPO: Enhancing RLHF with Weighted Preference Optimization | ['Wenxuan Zhou', 'Ravi Agrawal', 'Shujian Zhang', 'Sathish Reddy Indurthi', 'Sanqiang Zhao', 'Kaiqiang Song', 'Silei Xu', 'Chenguang Zhu'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Reinforcement learning from human feedback (RLHF) is a promising solution to align large language models (LLMs) more closely with human values. Off-policy preference optimization, where the preference data is obtained from other models, is widely adopted due to its cost efficiency and scalability. However, off-policy p... | 2024-06-17T17:59:13Z | EMNLP 2024 | null | null | null | null | null | null | null | null | null |
| 2406.11832 | Unveiling Encoder-Free Vision-Language Models | ['Haiwen Diao', 'Yufeng Cui', 'Xiaotong Li', 'Yueze Wang', 'Huchuan Lu', 'Xinlong Wang'] | ['cs.CV', 'cs.MM'] | Existing vision-language models (VLMs) mostly rely on vision encoders to extract visual features followed by large language models (LLMs) for visual-language tasks. However, the vision encoders set a strong inductive bias in abstracting visual representation, e.g., resolution, aspect ratio, and semantic priors, which c... | 2024-06-17T17:59:44Z | 17 pages, 8 figures, Accepted by NeurIPS2024 (spotlight) | null | null | Unveiling Encoder-Free Vision-Language Models | ['Haiwen Diao', 'Yufeng Cui', 'Xiaotong Li', 'Yueze Wang', 'Huchuan Lu', 'Xinlong Wang'] | 2024 | Neural Information Processing Systems | 36 | 85 | ['Computer Science'] |
| 2406.11838 | Autoregressive Image Generation without Vector Quantization | ['Tianhong Li', 'Yonglong Tian', 'He Li', 'Mingyang Deng', 'Kaiming He'] | ['cs.CV'] | Conventional wisdom holds that autoregressive models for image generation are typically accompanied by vector-quantized tokens. We observe that while a discrete-valued space can facilitate representing a categorical distribution, it is not a necessity for autoregressive modeling. In this work, we propose to model the p... | 2024-06-17T17:59:58Z | Neurips 2024 (Spotlight). Code: https://github.com/LTH14/mar | null | null | Autoregressive Image Generation without Vector Quantization | ['Tianhong Li', 'Yonglong Tian', 'He Li', 'Mingyang Deng', 'Kaiming He'] | 2024 | Neural Information Processing Systems | 238 | 56 | ['Computer Science'] |
| 2406.11933 | Harnessing Massive Satellite Imagery with Efficient Masked Image Modeling | ['Fengxiang Wang', 'Hongzhen Wang', 'Di Wang', 'Zonghao Guo', 'Zhenyu Zhong', 'Long Lan', 'Wenjing Yang', 'Jing Zhang'] | ['cs.CV'] | Masked Image Modeling (MIM) has become an essential method for building foundational visual models in remote sensing (RS). However, the limitations in size and diversity of existing RS datasets restrict the ability of MIM methods to learn generalizable representations. Additionally, conventional MIM techniques, which r... | 2024-06-17T15:41:57Z | ICCV 2025 | null | null | null | null | null | null | null | null | null |
| 2406.11939 | From Crowdsourced Data to High-Quality Benchmarks: Arena-Hard and BenchBuilder Pipeline | ['Tianle Li', 'Wei-Lin Chiang', 'Evan Frick', 'Lisa Dunlap', 'Tianhao Wu', 'Banghua Zhu', 'Joseph E. Gonzalez', 'Ion Stoica'] | ['cs.LG', 'cs.AI', 'cs.CL'] | The rapid evolution of Large Language Models (LLMs) has outpaced the development of model evaluation, highlighting the need for continuous curation of new, challenging benchmarks. However, manual curation of high-quality, human-aligned benchmarks is expensive and time-consuming. To address this, we introduce BenchBuild... | 2024-06-17T17:26:10Z | null | null | null | From Crowdsourced Data to High-Quality Benchmarks: Arena-Hard and BenchBuilder Pipeline | ['Tianle Li', 'Wei-Lin Chiang', 'Evan Frick', 'Lisa Dunlap', 'Tianhao Wu', 'Banghua Zhu', 'Joseph Gonzalez', 'Ion Stoica'] | 2024 | arXiv.org | 182 | 66 | ['Computer Science'] |
| 2406.11944 | Transcoders Find Interpretable LLM Feature Circuits | ['Jacob Dunefsky', 'Philippe Chlenski', 'Neel Nanda'] | ['cs.LG', 'cs.CL'] | A key goal in mechanistic interpretability is circuit analysis: finding sparse subgraphs of models corresponding to specific behaviors or capabilities. However, MLP sublayers make fine-grained circuit analysis on transformer-based language models difficult. In particular, interpretable features -- such as those found b... | 2024-06-17T17:49:00Z | 29 pages, 6 figures, 4 tables, 2 algorithms. NeurIPS 2024 | null | null | null | null | null | null | null | null | null |
| 2406.12031 | Large Scale Transfer Learning for Tabular Data via Language Modeling | ['Josh Gardner', 'Juan C. Perdomo', 'Ludwig Schmidt'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Tabular data -- structured, heterogeneous, spreadsheet-style data with rows and columns -- is widely used in practice across many domains. However, while recent foundation models have reduced the need for developing task-specific datasets and predictors in domains such as language modeling and computer vision, this tra... | 2024-06-17T18:58:20Z | NeurIPS 2024 camera-ready updates | null | null | Large Scale Transfer Learning for Tabular Data via Language Modeling | ['Josh Gardner', 'Juan C. Perdomo', 'Ludwig Schmidt'] | 2024 | Neural Information Processing Systems | 24 | 67 | ['Computer Science'] |
| 2406.12042 | Not All Prompts Are Made Equal: Prompt-based Pruning of Text-to-Image Diffusion Models | ['Alireza Ganjdanesh', 'Reza Shirkavand', 'Shangqian Gao', 'Heng Huang'] | ['cs.CV', 'cs.LG'] | Text-to-image (T2I) diffusion models have demonstrated impressive image generation capabilities. Still, their computational intensity prohibits resource-constrained organizations from deploying T2I models after fine-tuning them on their internal target data. While pruning techniques offer a potential solution to reduce... | 2024-06-17T19:22:04Z | null | null | null | null | null | null | null | null | null | null |
| 2406.12056 | Learning Molecular Representation in a Cell | ['Gang Liu', 'Srijit Seal', 'John Arevalo', 'Zhenwen Liang', 'Anne E. Carpenter', 'Meng Jiang', 'Shantanu Singh'] | ['cs.LG', 'q-bio.QM'] | Predicting drug efficacy and safety in vivo requires information on biological responses (e.g., cell morphology and gene expression) to small molecule perturbations. However, current molecular representation learning methods do not provide a comprehensive view of cell states under these perturbations and struggle to re... | 2024-06-17T19:48:42Z | 20 pages, 5 tables, 7 figures | null | null | Learning Molecular Representation in a Cell | ['Gang Liu', 'Srijit Seal', 'John Arevalo', 'Zhenwen Liang', 'A. Carpenter', 'Meng Jiang', 'Shantanu Singh'] | 2024 | International Conference on Learning Representations | 4 | 70 | ['Computer Science', 'Biology', 'Medicine'] |
| 2406.12074 | COMMUNITY-CROSS-INSTRUCT: Unsupervised Instruction Generation for Aligning Large Language Models to Online Communities | ['Zihao He', 'Minh Duc Chu', 'Rebecca Dorn', 'Siyi Guo', 'Kristina Lerman'] | ['cs.CL'] | Social scientists use surveys to probe the opinions and beliefs of populations, but these methods are slow, costly, and prone to biases. Recent advances in large language models (LLMs) enable the creating of computational representations or "digital twins" of populations that generate human-like responses mimicking the... | 2024-06-17T20:20:47Z | null | null | null | Community-Cross-Instruct: Unsupervised Instruction Generation for Aligning Large Language Models to Online Communities | ['Zihao He', 'Rebecca Dorn', 'Siyi Guo', 'Minh Duc Hoang Chu', 'Kristina Lerman'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 8 | 43 | ['Computer Science'] |
| 2406.12182 | Aqulia-Med LLM: Pioneering Full-Process Open-Source Medical Language Models | ['Lulu Zhao', 'Weihao Zeng', 'Xiaofeng Shi', 'Hua Zhou', 'Donglin Hao', 'Yonghua Lin'] | ['cs.CL', 'cs.AI'] | Recently, both closed-source LLMs and open-source communities have made significant strides, outperforming humans in various general domains. However, their performance in specific professional fields such as medicine, especially within the open-source community, remains suboptimal due to the complexity of medical know... | 2024-06-18T01:30:07Z | null | null | null | null | null | null | null | null | null | null |
| 2406.12194 | Universal Score-based Speech Enhancement with High Content Preservation | ['Robin Scheibler', 'Yusuke Fujita', 'Yuma Shirahata', 'Tatsuya Komatsu'] | ['eess.AS', 'cs.SD'] | We propose UNIVERSE++, a universal speech enhancement method based on score-based diffusion and adversarial training. Specifically, we improve the existing UNIVERSE model that decouples clean speech feature extraction and diffusion. Our contributions are three-fold. First, we make several modifications to the network a... | 2024-06-18T01:49:00Z | 5 pages, 5 figures, accepted at Interspeech 2024 | null | null | Universal Score-based Speech Enhancement with High Content Preservation | ['Robin Scheibler', 'Yusuke Fujita', 'Yuma Shirahata', 'Tatsuya Komatsu'] | 2024 | Interspeech | 15 | 52 | ['Computer Science', 'Engineering'] |
| 2406.12246 | TroL: Traversal of Layers for Large Language and Vision Models | ['Byung-Kwan Lee', 'Sangyun Chung', 'Chae Won Kim', 'Beomchan Park', 'Yong Man Ro'] | ['cs.LG', 'cs.CL', 'cs.CV'] | Large language and vision models (LLVMs) have been driven by the generalization power of large language models (LLMs) and the advent of visual instruction tuning. Along with scaling them up directly, these models enable LLVMs to showcase powerful vision language (VL) performances by covering diverse tasks via natural l... | 2024-06-18T03:42:00Z | EMNLP 2024. Code is available in https://github.com/ByungKwanLee/TroL | null | null | TroL: Traversal of Layers for Large Language and Vision Models | ['Byung-Kwan Lee', 'Sangyun Chung', 'Chae Won Kim', 'Beomchan Park', 'Yonghyun Ro'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 7 | 101 | ['Computer Science'] |
| 2406.12257 | CleanGen: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models | ['Yuetai Li', 'Zhangchen Xu', 'Fengqing Jiang', 'Luyao Niu', 'Dinuka Sahabandu', 'Bhaskar Ramasubramanian', 'Radha Poovendran'] | ['cs.AI', 'cs.CR'] | The remarkable performance of large language models (LLMs) in generation tasks has enabled practitioners to leverage publicly available models to power custom applications, such as chatbots and virtual assistants. However, the data used to train or fine-tune these LLMs is often undisclosed, allowing an attacker to comp... | 2024-06-18T04:10:38Z | This paper is presented at EMNLP 2024 | null | null | null | null | null | null | null | null | null |
| 2406.12303 | Immiscible Diffusion: Accelerating Diffusion Training with Noise Assignment | ['Yiheng Li', 'Heyang Jiang', 'Akio Kodaira', 'Masayoshi Tomizuka', 'Kurt Keutzer', 'Chenfeng Xu'] | ['cs.CV'] | In this paper, we point out that suboptimal noise-data mapping leads to slow training of diffusion models. During diffusion training, current methods diffuse each image across the entire noise space, resulting in a mixture of all images at every point in the noise layer. We emphasize that this random mixture of noise-d... | 2024-06-18T06:20:42Z | null | null | null | Immiscible Diffusion: Accelerating Diffusion Training with Noise Assignment | ['Yiheng Li', 'Heyang Jiang', 'Akio Kodaira', 'Masayoshi Tomizuka', 'Kurt Keutzer', 'Chenfeng Xu'] | 2024 | Neural Information Processing Systems | 9 | 48 | ['Computer Science'] |
| 2406.12428 | PSLM: Parallel Generation of Text and Speech with LLMs for Low-Latency Spoken Dialogue Systems | ['Kentaro Mitsui', 'Koh Mitsuda', 'Toshiaki Wakatsuki', 'Yukiya Hono', 'Kei Sawada'] | ['cs.CL', 'cs.AI', 'cs.LG', 'cs.SD', 'eess.AS'] | Multimodal language models that process both text and speech have a potential for applications in spoken dialogue systems. However, current models face two major challenges in response generation latency: (1) generating a spoken response requires the prior generation of a written response, and (2) speech sequences are ... | 2024-06-18T09:23:54Z | 9 pages, 6 figures, 4 tables, accepted for Findings of EMNLP 2024. Demo samples: https://rinnakk.github.io/research/publications/PSLM | null | null | PSLM: Parallel Generation of Text and Speech with LLMs for Low-Latency Spoken Dialogue Systems | ['Kentaro Mitsui', 'Koh Mitsuda', 'Toshiaki Wakatsuki', 'Yukiya Hono', 'Kei Sawada'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 6 | 34 | ['Computer Science', 'Engineering'] |
| 2406.12634 | News Without Borders: Domain Adaptation of Multilingual Sentence Embeddings for Cross-lingual News Recommendation | ['Andreea Iana', 'Fabian David Schmidt', 'Goran Glavaš', 'Heiko Paulheim'] | ['cs.IR', 'cs.AI', 'cs.CL', 'I.2.7; H.3.3'] | Rapidly growing numbers of multilingual news consumers pose an increasing challenge to news recommender systems in terms of providing customized recommendations. First, existing neural news recommenders, even when powered by multilingual language models (LMs), suffer substantial performance losses in zero-shot cross-li... | 2024-06-18T14:01:53Z | Accepted at the 47th European Conference on Information Retrieval (ECIR 2025) Appendix A is provided only in the arXiv version | null | null | null | null | null | null | null | null | null |
| 2406.12639 | Ask-before-Plan: Proactive Language Agents for Real-World Planning | ['Xuan Zhang', 'Yang Deng', 'Zifeng Ren', 'See-Kiong Ng', 'Tat-Seng Chua'] | ['cs.CL', 'cs.AI'] | The evolution of large language models (LLMs) has enhanced the planning capabilities of language agents in diverse real-world scenarios. Despite these advancements, the potential of LLM-powered agents to comprehend ambiguous user instructions for reasoning and decision-making is still under exploration. In this work, w... | 2024-06-18T14:07:28Z | Accepted by EMNLP 2024 Findings | null | null | null | null | null | null | null | null | null |
| 2406.12739 | Self-Distillation for Model Stacking Unlocks Cross-Lingual NLU in 200+ Languages | ['Fabian David Schmidt', 'Philipp Borchert', 'Ivan Vulić', 'Goran Glavaš'] | ['cs.CL'] | LLMs have become a go-to solution not just for text generation, but also for natural language understanding (NLU) tasks. Acquiring extensive knowledge through language modeling on web-scale corpora, they excel on English NLU, yet struggle to extend their NLU capabilities to underrepresented languages. In contrast, mach... | 2024-06-18T16:00:20Z | null | null | null | Self-Distillation for Model Stacking Unlocks Cross-Lingual NLU in 200+ Languages | ['Fabian David Schmidt', 'Philipp Borchert', "Ivan Vuli'c", 'Goran Glavavs'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 6 | 44 | ['Computer Science'] |
| 2406.12793 | ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools | ['Team GLM', ':', 'Aohan Zeng', 'Bin Xu', 'Bowen Wang', 'Chenhui Zhang', 'Da Yin', 'Dan Zhang', 'Diego Rojas', 'Guanyu Feng', 'Hanlin Zhao', 'Hanyu Lai', 'Hao Yu', 'Hongning Wang', 'Jiadai Sun', 'Jiajie Zhang', 'Jiale Cheng', 'Jiayi Gui', 'Jie Tang', 'Jing Zhang', 'Jingyu Sun', 'Juanzi Li', 'Lei Zhao', 'Lindong Wu', 'L... | ['cs.CL'] | We introduce ChatGLM, an evolving family of large language models that we have been developing over time. This report primarily focuses on the GLM-4 language series, which includes GLM-4, GLM-4-Air, and GLM-4-9B. They represent our most capable models that are trained with all the insights and lessons gained from the p... | 2024-06-18T16:58:21Z | null | null | null | ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools | ['Team Glm Aohan Zeng', 'Bin Xu', 'Bowen Wang', 'Chenhui Zhang', 'Da Yin', 'Diego Rojas', 'Guanyu Feng', 'Hanlin Zhao', 'Hanyu Lai', 'Hao Yu', 'Hongning Wang', 'Jiadai Sun', 'Jiajie Zhang', 'Jiale Cheng', 'Jiayi Gui', 'Jie Tang', 'Jing Zhang', 'Juanzi Li', 'Lei Zhao', 'Lindong Wu', 'Lucen Zhong', 'Mingdao Liu', 'Minlie... | 2024 | arXiv.org | 650 | 51 | ['Computer Science'] |
2,406.12845 | Interpretable Preferences via Multi-Objective Reward Modeling and
Mixture-of-Experts | ['Haoxiang Wang', 'Wei Xiong', 'Tengyang Xie', 'Han Zhao', 'Tong Zhang'] | ['cs.LG', 'cs.CL'] | Reinforcement learning from human feedback (RLHF) has emerged as the primary
method for aligning large language models (LLMs) with human preferences. The
RLHF process typically starts by training a reward model (RM) using human
preference data. Conventional RMs are trained on pairwise responses to the same
user request... | 2024-06-18T17:58:28Z | Technical report v1. Code and model are released at
https://github.com/RLHFlow/RLHF-Reward-Modeling/ | null | null | Interpretable Preferences via Multi-Objective Reward Modeling and Mixture-of-Experts | ['Haoxiang Wang', 'Wei Xiong', 'Tengyang Xie', 'Han Zhao', 'Tong Zhang'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 180 | 60 | ['Computer Science'] |
2406.12925 | GLiNER multi-task: Generalist Lightweight Model for Various Information
Extraction Tasks | ['Ihor Stepanov', 'Mykhailo Shtopko'] | ['cs.LG', 'cs.AI', 'cs.CL', 'cs.IR'] | Information extraction tasks require accurate, efficient, and
generalisable models. Classical supervised deep learning approaches can achieve
the required performance, but they need large datasets and are limited in their
ability to adapt to different tasks. On the other hand, large language models
(LLMs) demonstr... | 2024-06-14T13:54:29Z | 11 pages, 1 figure, 6 tables | null | null | null | null | null | null | null | null | null |
2406.13181 | The Impact of Auxiliary Patient Data on Automated Chest X-Ray Report
Generation and How to Incorporate It | ['Aaron Nicolson', 'Shengyao Zhuang', 'Jason Dowling', 'Bevan Koopman'] | ['cs.CV'] | This study investigates the integration of diverse patient data sources into
multimodal language models for automated chest X-ray (CXR) report generation.
Traditionally, CXR report generation relies solely on CXR images and limited
radiology data, overlooking valuable information from patient health records,
particular... | 2024-06-19T03:25:31Z | null | null | null | The Impact of Auxiliary Patient Data on Automated Chest X-Ray Report Generation and How to Incorporate It | ['Aaron Nicolson', 'Shengyao Zhuang', 'Jason Dowling', 'Bevan Koopman'] | 2024 | arXiv.org | 1 | 49 | ['Computer Science'] |
2406.13337 | Medical Spoken Named Entity Recognition | ['Khai Le-Duc', 'David Thulke', 'Hung-Phong Tran', 'Long Vo-Dang', 'Khai-Nguyen Nguyen', 'Truong-Son Hy', 'Ralf Schlüter'] | ['eess.AS', 'cs.CL', 'cs.LG', 'cs.SD'] | Spoken Named Entity Recognition (NER) aims to extract named entities from
speech and categorise them into types like person, location, organization, etc.
In this work, we present VietMed-NER - the first spoken NER dataset in the
medical domain. To our knowledge, our Vietnamese real-world dataset is the
largest spoken N... | 2024-06-19T08:39:09Z | NAACL 2025, 60 pages | null | null | null | null | null | null | null | null | null |
2406.13502 | ManWav: The First Manchu ASR Model | ['Jean Seo', 'Minha Kang', 'Sungjoo Byun', 'Sangah Lee'] | ['cs.CL', 'cs.SD', 'eess.AS'] | This study addresses the widening gap in Automatic Speech Recognition (ASR)
research between high resource and extremely low resource languages, with a
particular focus on Manchu, a critically endangered language. Manchu
exemplifies the challenges faced by marginalized linguistic communities in
accessing state-of-the-a... | 2024-06-19T12:47:34Z | ACL2024/Field Matters | null | null | ManWav: The First Manchu ASR Model | ['Jean Seo', 'Minha Kang', 'Sungjoo Byun', 'Sangah Lee'] | 2024 | FIELDMATTERS | 1 | 19 | ['Computer Science', 'Engineering'] |
2406.13642 | SpatialBot: Precise Spatial Understanding with Vision Language Models | ['Wenxiao Cai', 'Iaroslav Ponomarenko', 'Jianhao Yuan', 'Xiaoqi Li', 'Wankou Yang', 'Hao Dong', 'Bo Zhao'] | ['cs.CV'] | Vision Language Models (VLMs) have achieved impressive performance in 2D
image understanding, however they are still struggling with spatial
understanding which is the foundation of Embodied AI. In this paper, we propose
SpatialBot for better spatial understanding by feeding both RGB and depth
images. Additionally, we ... | 2024-06-19T15:41:30Z | null | null | null | null | null | null | null | null | null | null |
2406.13764 | Can LLMs Reason in the Wild with Programs? | ['Yuan Yang', 'Siheng Xiong', 'Ali Payani', 'Ehsan Shareghi', 'Faramarz Fekri'] | ['cs.CL'] | Large Language Models (LLMs) have shown superior capability to solve
reasoning problems with programs. While being a promising direction, most of
such frameworks are trained and evaluated in settings with a prior knowledge of
task requirements. However, as LLMs become more capable, it is necessary to
assess their reaso... | 2024-06-19T18:26:19Z | null | null | null | Can LLMs Reason in the Wild with Programs? | ['Yuan Yang', 'Siheng Xiong', 'Ali Payani', 'Ehsan Shareghi', 'F. Fekri'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 16 | 28 | ['Computer Science'] |
2406.13807 | AlanaVLM: A Multimodal Embodied AI Foundation Model for Egocentric Video
Understanding | ['Alessandro Suglia', 'Claudio Greco', 'Katie Baker', 'Jose L. Part', 'Ioannis Papaioannou', 'Arash Eshghi', 'Ioannis Konstas', 'Oliver Lemon'] | ['cs.CV', 'cs.AI', 'cs.CL'] | AI personal assistants deployed via robots or wearables require embodied
understanding to collaborate with humans effectively. However, current
Vision-Language Models (VLMs) primarily focus on third-person view videos,
neglecting the richness of egocentric perceptual experience. To address this
gap, we propose three ke... | 2024-06-19T20:14:14Z | Code available https://github.com/alanaai/EVUD | null | null | null | null | null | null | null | null | null |
2406.14130 | ExVideo: Extending Video Diffusion Models via Parameter-Efficient
Post-Tuning | ['Zhongjie Duan', 'Wenmeng Zhou', 'Cen Chen', 'Yaliang Li', 'Weining Qian'] | ['cs.CV'] | Recently, advancements in video synthesis have attracted significant
attention. Video synthesis models such as AnimateDiff and Stable Video
Diffusion have demonstrated the practical applicability of diffusion models in
creating dynamic visual content. The emergence of SORA has further spotlighted
the potential of video... | 2024-06-20T09:18:54Z | 8 pages, 5 figures | null | null | ExVideo: Extending Video Diffusion Models via Parameter-Efficient Post-Tuning | ['Zhongjie Duan', 'Wenmeng Zhou', 'Cen Chen', 'Yaliang Li', 'Weining Qian'] | 2024 | arXiv.org | 2 | 47 | ['Computer Science'] |
2406.14177 | SimulSeamless: FBK at IWSLT 2024 Simultaneous Speech Translation | ['Sara Papi', 'Marco Gaido', 'Matteo Negri', 'Luisa Bentivogli'] | ['cs.CL', 'cs.AI', 'cs.SD', 'eess.AS'] | This paper describes the FBK's participation in the Simultaneous Translation
Evaluation Campaign at IWSLT 2024. For this year's submission in the
speech-to-text translation (ST) sub-track, we propose SimulSeamless, which is
realized by combining AlignAtt and SeamlessM4T in its medium configuration. The
SeamlessM4T mode... | 2024-06-20T10:34:46Z | null | null | null | null | null | null | null | null | null | null |
2406.14239 | LeYOLO, New Embedded Architecture for Object Detection | ['Lilian Hollard', 'Lucas Mohimont', 'Nathalie Gaveau', 'Luiz Angelo Steffenel'] | ['cs.CV'] | Efficient computation in deep neural networks is crucial for real-time object
detection. However, recent advancements primarily result from improved
high-performing hardware rather than improving parameters and FLOP efficiency.
This is especially evident in the latest YOLO architectures, where speed is
prioritized over... | 2024-06-20T12:08:24Z | https://crv.pubpub.org/pub/sae4lpdf | Proceedings of the Conference on Robots and Vision (2025) | 10.21428/d82e957c.aed2cb06 | null | null | null | null | null | null | null |
2406.14272 | MultiTalk: Enhancing 3D Talking Head Generation Across Languages with
Multilingual Video Dataset | ['Kim Sung-Bin', 'Lee Chae-Yeon', 'Gihun Son', 'Oh Hyun-Bin', 'Janghoon Ju', 'Suekyeong Nam', 'Tae-Hyun Oh'] | ['cs.CV', 'cs.GR'] | Recent studies in speech-driven 3D talking head generation have achieved
convincing results in verbal articulations. However, generating accurate
lip-syncs degrades when applied to input speech in other languages, possibly
due to the lack of datasets covering a broad spectrum of facial movements
across languages. In th... | 2024-06-20T12:52:46Z | Interspeech 2024 | null | null | null | null | null | null | null | null | null |
2406.14294 | DASB - Discrete Audio and Speech Benchmark | ['Pooneh Mousavi', 'Luca Della Libera', 'Jarod Duret', 'Artem Ploujnikov', 'Cem Subakan', 'Mirco Ravanelli'] | ['cs.SD', 'cs.AI', 'eess.AS'] | Discrete audio tokens have recently gained considerable attention for their
potential to connect audio and language processing, enabling the creation of
modern multimodal large language models. Ideal audio tokens must effectively
preserve phonetic and semantic content along with paralinguistic information,
speaker iden... | 2024-06-20T13:23:27Z | 9 pages, 5 tables | null | null | DASB - Discrete Audio and Speech Benchmark | ['Pooneh Mousavi', 'Luca Della Libera', 'J. Duret', 'Artem Ploujnikov', 'Cem Subakan', 'M. Ravanelli'] | 2024 | arXiv.org | 21 | 80 | ['Computer Science', 'Engineering'] |
2406.14377 | CE-SSL: Computation-Efficient Semi-Supervised Learning for ECG-based
Cardiovascular Diseases Detection | ['Rushuang Zhou', 'Lei Clifton', 'Zijun Liu', 'Kannie W. Y. Chan', 'David A. Clifton', 'Yuan-Ting Zhang', 'Yining Dong'] | ['cs.LG', 'cs.AI'] | The label scarcity problem is the main challenge that hinders the wide
application of deep learning systems in automatic cardiovascular diseases
(CVDs) detection using electrocardiography (ECG). Tuning pre-trained models
alleviates this problem by transferring knowledge learned from large datasets
to downstream small d... | 2024-06-20T14:45:13Z | null | null | null | null | null | null | null | null | null | null |
2406.14408 | FVEL: Interactive Formal Verification Environment with Large Language
Models via Theorem Proving | ['Xiaohan Lin', 'Qingxing Cao', 'Yinya Huang', 'Haiming Wang', 'Jianqiao Lu', 'Zhengying Liu', 'Linqi Song', 'Xiaodan Liang'] | ['cs.AI', 'cs.CL', 'cs.LG'] | Formal verification (FV) has witnessed growing significance with current
emerging program synthesis by the evolving large language models (LLMs).
However, current formal verification mainly resorts to symbolic verifiers or
hand-craft rules, resulting in limitations for extensive and flexible
verification. On the other ... | 2024-06-20T15:31:05Z | null | null | null | null | null | null | null | null | null | null |
2406.14491 | Instruction Pre-Training: Language Models are Supervised Multitask
Learners | ['Daixuan Cheng', 'Yuxian Gu', 'Shaohan Huang', 'Junyu Bi', 'Minlie Huang', 'Furu Wei'] | ['cs.CL'] | Unsupervised multitask pre-training has been the critical method behind the
recent success of language models (LMs). However, supervised multitask learning
still holds significant promise, as scaling it in the post-training stage
trends towards better generalization. In this paper, we explore supervised
multitask pre-t... | 2024-06-20T16:55:33Z | EMNLP 2024 Main Conference | null | null | null | null | null | null | null | null | null |
2406.14528 | DeciMamba: Exploring the Length Extrapolation Potential of Mamba | ['Assaf Ben-Kish', 'Itamar Zimerman', 'Shady Abu-Hussein', 'Nadav Cohen', 'Amir Globerson', 'Lior Wolf', 'Raja Giryes'] | ['cs.LG', 'cs.AI'] | Long-range sequence processing poses a significant challenge for Transformers
due to their quadratic complexity in input length. A promising alternative is
Mamba, which demonstrates high performance and achieves Transformer-level
capabilities while requiring substantially fewer computational resources. In
this paper we... | 2024-06-20T17:40:18Z | Official Implementation: https://github.com/assafbk/DeciMamba | null | null | DeciMamba: Exploring the Length Extrapolation Potential of Mamba | ['Assaf Ben-Kish', 'Itamar Zimerman', 'Shady Abu-Hussein', 'Nadav Cohen', 'Amir Globerson', 'Lior Wolf', 'Raja Giryes'] | 2024 | International Conference on Learning Representations | 20 | 65 | ['Computer Science'] |
2406.14544 | Prism: A Framework for Decoupling and Assessing the Capabilities of VLMs | ['Yuxuan Qiao', 'Haodong Duan', 'Xinyu Fang', 'Junming Yang', 'Lin Chen', 'Songyang Zhang', 'Jiaqi Wang', 'Dahua Lin', 'Kai Chen'] | ['cs.CV', 'cs.CL'] | Vision Language Models (VLMs) demonstrate remarkable proficiency in
addressing a wide array of visual questions, which requires strong perception
and reasoning faculties. Assessing these two competencies independently is
crucial for model refinement, despite the inherent difficulty due to the
intertwined nature of seei... | 2024-06-20T17:54:03Z | null | null | null | Prism: A Framework for Decoupling and Assessing the Capabilities of VLMs | ['Yu Qiao', 'Haodong Duan', 'Xinyu Fang', 'Junming Yang', 'Lin Chen', 'Songyang Zhang', 'Jiaqi Wang', 'Dahua Lin', 'Kai Chen'] | 2024 | Neural Information Processing Systems | 23 | 61 | ['Computer Science'] |
2406.14546 | Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from
Disparate Training Data | ['Johannes Treutlein', 'Dami Choi', 'Jan Betley', 'Samuel Marks', 'Cem Anil', 'Roger Grosse', 'Owain Evans'] | ['cs.CL', 'cs.AI', 'cs.LG'] | One way to address safety risks from large language models (LLMs) is to
censor dangerous knowledge from their training data. While this removes the
explicit information, implicit information can remain scattered across various
training documents. Could an LLM infer the censored knowledge by piecing
together these impli... | 2024-06-20T17:55:04Z | Accepted at NeurIPS 2024. 10 pages, 8 figures | null | null | null | null | null | null | null | null | null |
2406.14553 | xCOMET-lite: Bridging the Gap Between Efficiency and Quality in Learned
MT Evaluation Metrics | ['Daniil Larionov', 'Mikhail Seleznyov', 'Vasiliy Viskov', 'Alexander Panchenko', 'Steffen Eger'] | ['cs.CL'] | State-of-the-art trainable machine translation evaluation metrics like xCOMET
achieve high correlation with human judgment but rely on large encoders (up to
10.7B parameters), making them computationally expensive and inaccessible to
researchers with limited resources. To address this issue, we investigate
whether the ... | 2024-06-20T17:58:34Z | EMNLP 2024 (Main Conference) Camera-Ready Version | null | null | null | null | null | null | null | null | null |
2406.14598 | SORRY-Bench: Systematically Evaluating Large Language Model Safety
Refusal | ['Tinghao Xie', 'Xiangyu Qi', 'Yi Zeng', 'Yangsibo Huang', 'Udari Madhushani Sehwag', 'Kaixuan Huang', 'Luxi He', 'Boyi Wei', 'Dacheng Li', 'Ying Sheng', 'Ruoxi Jia', 'Bo Li', 'Kai Li', 'Danqi Chen', 'Peter Henderson', 'Prateek Mittal'] | ['cs.AI'] | Evaluating aligned large language models' (LLMs) ability to recognize and
reject unsafe user requests is crucial for safe, policy-compliant deployments.
Existing evaluation efforts, however, face three limitations that we address
with SORRY-Bench, our proposed benchmark. First, existing methods often use
coarse-grained... | 2024-06-20T17:56:07Z | Paper accepted to ICLR 2025 | null | null | SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal Behaviors | ['Tinghao Xie', 'Xiangyu Qi', 'Yi Zeng', 'Yangsibo Huang', 'Udari Madhushani Sehwag', 'Kaixuan Huang', 'Luxi He', 'Boyi Wei', 'Dacheng Li', 'Ying Sheng', 'Ruoxi Jia', 'Bo Li', 'Kai Li', 'Danqi Chen', 'Peter Henderson', 'Prateek Mittal'] | 2024 | International Conference on Learning Representations | 79 | 55 | ['Computer Science'] |
2406.14643 | Holistic Evaluation for Interleaved Text-and-Image Generation | ['Minqian Liu', 'Zhiyang Xu', 'Zihao Lin', 'Trevor Ashby', 'Joy Rimchala', 'Jiaxin Zhang', 'Lifu Huang'] | ['cs.CV', 'cs.AI', 'cs.CL'] | Interleaved text-and-image generation has been an intriguing research
direction, where the models are required to generate both images and text
pieces in an arbitrary order. Despite the emerging advancements in interleaved
generation, the progress in its evaluation still significantly lags behind.
Existing evaluation b... | 2024-06-20T18:07:19Z | EMNLP 2024 Main Conference. 15 pages, 6 figures, 7 tables. Website:
https://vt-nlp.github.io/InterleavedEval/. Dataset:
https://huggingface.co/mqliu/InterleavedBench | null | null | null | null | null | null | null | null | null |
2406.14712 | Qiskit HumanEval: An Evaluation Benchmark For Quantum Code Generative
Models | ['Sanjay Vishwakarma', 'Francis Harkins', 'Siddharth Golecha', 'Vishal Sharathchandra Bajpe', 'Nicolas Dupuis', 'Luca Buratti', 'David Kremer', 'Ismael Faro', 'Ruchir Puri', 'Juan Cruz-Benito'] | ['quant-ph', 'cs.AI'] | Quantum programs are typically developed using quantum Software Development
Kits (SDKs). The rapid advancement of quantum computing necessitates new tools
to streamline this development process, and one such tool could be Generative
Artificial intelligence (GenAI). In this study, we introduce and use the Qiskit
HumanEv... | 2024-06-20T20:14:22Z | null | null | null | Qiskit HumanEval: An Evaluation Benchmark for Quantum Code Generative Models | ['Sanjay Vishwakarma', 'Francis Harkins', 'Siddharth Golecha', 'Vishal Sharathchandra Bajpe', 'Nicolas Dupuis', 'Luca Buratti', 'David Kremer', 'Ismael Faro', 'Ruchir Puri', 'Juan Cruz-Benito'] | 2024 | International Conference on Quantum Computing and Engineering | 3 | 25 | ['Physics', 'Computer Science'] |
2406.14775 | Machine Learning Global Simulation of Nonlocal Gravity Wave Propagation | ['Aman Gupta', 'Aditi Sheshadri', 'Sujit Roy', 'Vishal Gaur', 'Manil Maskey', 'Rahul Ramachandran'] | ['physics.ao-ph', 'cs.LG', 'physics.flu-dyn', 'physics.geo-ph'] | Global climate models typically operate at a grid resolution of hundreds of
kilometers and fail to resolve atmospheric mesoscale processes, e.g., clouds,
precipitation, and gravity waves (GWs). Model representation of these processes
and their sources is essential to the global circulation and planetary energy
budget, ... | 2024-06-20T22:57:38Z | International Conference on Machine Learning 2024 | null | null | null | null | null | null | null | null | null |
2406.14835 | ToVo: Toxicity Taxonomy via Voting | ['Tinh Son Luong', 'Thanh-Thien Le', 'Thang Viet Doan', 'Linh Ngo Van', 'Thien Huu Nguyen', 'Diep Thi-Ngoc Nguyen'] | ['cs.CL', 'cs.LG'] | Existing toxic detection models face significant limitations, such as lack of
transparency, customization, and reproducibility. These challenges stem from
the closed-source nature of their training data and the paucity of explanations
for their evaluation mechanism. To address these issues, we propose a dataset
creatio... | 2024-06-21T02:35:30Z | Findings of NAACL 2025 | null | null | null | null | null | null | null | null | null |
2406.14868 | Direct Multi-Turn Preference Optimization for Language Agents | ['Wentao Shi', 'Mengqi Yuan', 'Junkang Wu', 'Qifan Wang', 'Fuli Feng'] | ['cs.CL', 'cs.LG'] | Adapting Large Language Models (LLMs) for agent tasks is critical in
developing language agents. Direct Preference Optimization (DPO) is a promising
technique for this adaptation with the alleviation of compounding errors,
offering a means to directly optimize Reinforcement Learning (RL) objectives.
However, applying D... | 2024-06-21T05:13:20Z | Accepted by EMNLP 2024 Main | null | null | null | null | null | null | null | null | null |
2406.14875 | GLOBE: A High-quality English Corpus with Global Accents for Zero-shot
Speaker Adaptive Text-to-Speech | ['Wenbin Wang', 'Yang Song', 'Sanjay Jha'] | ['cs.SD', 'eess.AS'] | This paper introduces GLOBE, a high-quality English corpus with worldwide
accents, specifically designed to address the limitations of current zero-shot
speaker adaptive Text-to-Speech (TTS) systems that exhibit poor
generalizability in adapting to speakers with accents. Compared to commonly
used English corpora, such ... | 2024-06-21T05:55:45Z | Interspeech 2024, 4 pages, 3 figures | null | null | GLOBE: A High-quality English Corpus with Global Accents for Zero-shot Speaker Adaptive Text-to-Speech | ['Wenbin Wang', 'Yang Song', 'Sanjay Jha'] | 2024 | Interspeech | 10 | 41 | ['Computer Science', 'Engineering'] |
2406.14882 | 70B-parameter large language models in Japanese medical
question-answering | ['Issey Sukeda', 'Risa Kishikawa', 'Satoshi Kodera'] | ['cs.CL'] | Since the rise of large language models (LLMs), the domain adaptation has
been one of the hot topics in various domains. Many medical LLMs trained with
English medical dataset have made public recently. However, Japanese LLMs in
medical domain still lack its research. Here we utilize multiple 70B-parameter
LLMs for the... | 2024-06-21T06:04:10Z | 7 pages, 2 figures, 4 Tables | null | null | null | null | null | null | null | null | null |
2406.15252 | VideoScore: Building Automatic Metrics to Simulate Fine-grained Human
Feedback for Video Generation | ['Xuan He', 'Dongfu Jiang', 'Ge Zhang', 'Max Ku', 'Achint Soni', 'Sherman Siu', 'Haonan Chen', 'Abhranil Chandra', 'Ziyan Jiang', 'Aaran Arulraj', 'Kai Wang', 'Quy Duc Do', 'Yuansheng Ni', 'Bohan Lyu', 'Yaswanth Narsupalli', 'Rongqi Fan', 'Zhiheng Lyu', 'Yuchen Lin', 'Wenhu Chen'] | ['cs.CV', 'cs.AI'] | The recent years have witnessed great advances in video generation. However,
the development of automatic video metrics is lagging significantly behind.
None of the existing metric is able to provide reliable scores over generated
videos. The main barrier is the lack of large-scale human-annotated dataset. In
this pape... | 2024-06-21T15:43:46Z | null | null | null | VideoScore: Building Automatic Metrics to Simulate Fine-grained Human Feedback for Video Generation | ['Xuan He', 'Dongfu Jiang', 'Ge Zhang', 'Max W.F. Ku', 'Achint Soni', 'Sherman Siu', 'Haonan Chen', 'Abhranil Chandra', 'Ziyan Jiang', 'Aaran Arulraj', 'Kai Wang', 'Quy Duc Do', 'Yuansheng Ni', 'Bohan Lyu', 'Yaswanth Narsupalli', 'Rongqi "Richard" Fan', 'Zhiheng Lyu', 'Yuchen Lin', 'Wenhu Chen'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 56 | 72 | ['Computer Science'] |
2406.15339 | Image Conductor: Precision Control for Interactive Video Synthesis | ['Yaowei Li', 'Xintao Wang', 'Zhaoyang Zhang', 'Zhouxia Wang', 'Ziyang Yuan', 'Liangbin Xie', 'Yuexian Zou', 'Ying Shan'] | ['cs.CV', 'cs.AI', 'cs.MM'] | Filmmaking and animation production often require sophisticated techniques
for coordinating camera transitions and object movements, typically involving
labor-intensive real-world capturing. Despite advancements in generative AI for
video creation, achieving precise control over motion for interactive video
asset gener... | 2024-06-21T17:55:05Z | Project webpage available at
https://liyaowei-stu.github.io/project/ImageConductor/ | null | null | Image Conductor: Precision Control for Interactive Video Synthesis | ['Yaowei Li', 'Xintao Wang', 'Zhaoyang Zhang', 'Zhouxia Wang', 'Ziyang Yuan', 'Liangbin Xie', 'Yuexian Zou', 'Ying Shan'] | 2024 | arXiv.org | 27 | 43 | ['Computer Science'] |
2406.15349 | NAVSIM: Data-Driven Non-Reactive Autonomous Vehicle Simulation and
Benchmarking | ['Daniel Dauner', 'Marcel Hallgarten', 'Tianyu Li', 'Xinshuo Weng', 'Zhiyu Huang', 'Zetong Yang', 'Hongyang Li', 'Igor Gilitschenski', 'Boris Ivanovic', 'Marco Pavone', 'Andreas Geiger', 'Kashyap Chitta'] | ['cs.CV', 'cs.AI', 'cs.LG', 'cs.RO'] | Benchmarking vision-based driving policies is challenging. On one hand,
open-loop evaluation with real data is easy, but these results do not reflect
closed-loop performance. On the other, closed-loop evaluation is possible in
simulation, but is hard to scale due to its significant computational demands.
Further, the s... | 2024-06-21T17:59:02Z | NeurIPS 2024 Datasets and Benchmarks | null | null | null | null | null | null | null | null | null |
2406.15487 | Improving Text-To-Audio Models with Synthetic Captions | ['Zhifeng Kong', 'Sang-gil Lee', 'Deepanway Ghosal', 'Navonil Majumder', 'Ambuj Mehrish', 'Rafael Valle', 'Soujanya Poria', 'Bryan Catanzaro'] | ['cs.CL', 'cs.LG', 'cs.SD', 'eess.AS'] | It is an open challenge to obtain high quality training data, especially
captions, for text-to-audio models. Although prior methods have leveraged
\textit{text-only language models} to augment and improve captions, such
methods have limitations related to scale and coherence between audio and
captions. In this work, we... | 2024-06-18T00:02:15Z | null | null | null | Improving Text-To-Audio Models with Synthetic Captions | ['Zhifeng Kong', 'Sang-gil Lee', 'Deepanway Ghosal', 'Navonil Majumder', 'Ambuj Mehrish', 'Rafael Valle', 'Soujanya Poria', 'Bryan Catanzaro'] | 2024 | Synthetic Data’s Transformative Role in Foundational Speech Models | 13 | 38 | ['Computer Science', 'Engineering'] |
2406.15593 | News Deja Vu: Connecting Past and Present with Semantic Search | ['Brevin Franklin', 'Emily Silcock', 'Abhishek Arora', 'Tom Bryan', 'Melissa Dell'] | ['cs.CL', 'econ.GN', 'q-fin.EC'] | Social scientists and the general public often analyze contemporary events by
drawing parallels with the past, a process complicated by the vast, noisy, and
unstructured nature of historical texts. For example, hundreds of millions of
page scans from historical newspapers have been noisily transcribed.
Traditional spar... | 2024-06-21T18:50:57Z | null | null | null | null | null | null | null | null | null | null |
2406.15657 | FIRST: Faster Improved Listwise Reranking with Single Token Decoding | ['Revanth Gangi Reddy', 'JaeHyeok Doo', 'Yifei Xu', 'Md Arafat Sultan', 'Deevya Swain', 'Avirup Sil', 'Heng Ji'] | ['cs.IR'] | Large Language Models (LLMs) have significantly advanced the field of
information retrieval, particularly for reranking. Listwise LLM rerankers have
showcased superior performance and generalizability compared to existing
supervised approaches. However, conventional listwise LLM reranking methods
lack efficiency as the... | 2024-06-21T21:27:50Z | Preprint | null | null | null | null | null | null | null | null | null |
2406.15669 | CARE: a Benchmark Suite for the Classification and Retrieval of Enzymes | ['Jason Yang', 'Ariane Mora', 'Shengchao Liu', 'Bruce J. Wittmann', 'Anima Anandkumar', 'Frances H. Arnold', 'Yisong Yue'] | ['q-bio.BM', 'cs.LG'] | Enzymes are important proteins that catalyze chemical reactions. In recent
years, machine learning methods have emerged to predict enzyme function from
sequence; however, there are no standardized benchmarks to evaluate these
methods. We introduce CARE, a benchmark and dataset suite for the
Classification And Retrieval... | 2024-06-21T22:01:05Z | null | null | CARE: a Benchmark Suite for the Classification and Retrieval of Enzymes | ['Jason Yang', 'Ariane Mora', 'Shengchao Liu', 'Bruce J. Wittmann', 'Anima Anandkumar', 'Frances H. Arnold', 'Yisong Yue'] | 2024 | Neural Information Processing Systems | 7 | 107 | ['Computer Science', 'Biology'] |
2406.15695 | SS-GEN: A Social Story Generation Framework with Large Language Models | ['Yi Feng', 'Mingyang Song', 'Jiaqi Wang', 'Zhuang Chen', 'Guanqun Bi', 'Minlie Huang', 'Liping Jing', 'Jian Yu'] | ['cs.CL'] | Children with Autism Spectrum Disorder (ASD) often misunderstand social
situations and struggle to participate in daily routines. Social Stories are
traditionally crafted by psychology experts under strict constraints to address
these challenges but are costly and limited in diversity. As Large Language
Models (LLMs) a... | 2024-06-22T00:14:48Z | AAAI 2025 (Oral) | null | null | null | null | null | null | null | null | null |
2406.15704 | video-SALMONN: Speech-Enhanced Audio-Visual Large Language Models | ['Guangzhi Sun', 'Wenyi Yu', 'Changli Tang', 'Xianzhao Chen', 'Tian Tan', 'Wei Li', 'Lu Lu', 'Zejun Ma', 'Yuxuan Wang', 'Chao Zhang'] | ['cs.CV'] | Speech understanding as an element of the more generic video understanding
using audio-visual large language models (av-LLMs) is a crucial yet
understudied aspect. This paper proposes video-SALMONN, a single end-to-end
av-LLM for video processing, which can understand not only visual frame
sequences, audio events and m... | 2024-06-22T01:36:11Z | Accepted at ICML 2024. arXiv admin note: substantial text overlap
with arXiv:2310.05863 | null | null | null | null | null | null | null | null | null |
2406.15718 | Beyond the Turn-Based Game: Enabling Real-Time Conversations with Duplex
Models | ['Xinrong Zhang', 'Yingfa Chen', 'Shengding Hu', 'Xu Han', 'Zihang Xu', 'Yuanwei Xu', 'Weilin Zhao', 'Maosong Sun', 'Zhiyuan Liu'] | ['cs.CL'] | As large language models (LLMs) increasingly permeate daily lives, there is a
growing demand for real-time interactions that mirror human conversations.
Traditional turn-based chat systems driven by LLMs prevent users from verbally
interacting with the system while it is generating responses. To overcome these
limitati... | 2024-06-22T03:20:10Z | null | null | null | null | null | null | null | null | null | null |
2406.15888 | Real-time Speech Summarization for Medical Conversations | ['Khai Le-Duc', 'Khai-Nguyen Nguyen', 'Long Vo-Dang', 'Truong-Son Hy'] | ['cs.CL', 'cs.AI', 'cs.LG', 'cs.SD', 'eess.AS'] | In doctor-patient conversations, identifying medically relevant information
is crucial, posing the need for conversation summarization. In this work, we
propose the first deployable real-time speech summarization system for
real-world applications in industry, which generates a local summary after
every N speech uttera... | 2024-06-22T16:37:51Z | Interspeech 2024 (Oral) | null | null | Real-time Speech Summarization for Medical Conversations | ['Khai Le-Duc', 'Khai-Nguyen Nguyen', 'Long Vo-Dang', 'Truong-Son Hy'] | 2024 | Interspeech | 2 | 26 | ['Computer Science', 'Engineering'] |
2406.15979 | Deep Learning Segmentation of Ascites on Abdominal CT Scans for
Automatic Volume Quantification | ['Benjamin Hou', 'Sung-Won Lee', 'Jung-Min Lee', 'Christopher Koh', 'Jing Xiao', 'Perry J. Pickhardt', 'Ronald M. Summers'] | ['eess.IV', 'cs.CV'] | Purpose: To evaluate the performance of an automated deep learning method in
detecting ascites and subsequently quantifying its volume in patients with
liver cirrhosis and ovarian cancer.
Materials and Methods: This retrospective study included contrast-enhanced
and non-contrast abdominal-pelvic CT scans of patients ... | 2024-06-23T01:32:53Z | null | null | 10.1148/ryai.230601 | null | null | null | null | null | null | null |
2406.16020 | AudioBench: A Universal Benchmark for Audio Large Language Models | ['Bin Wang', 'Xunlong Zou', 'Geyu Lin', 'Shuo Sun', 'Zhuohan Liu', 'Wenyu Zhang', 'Zhengyuan Liu', 'AiTi Aw', 'Nancy F. Chen'] | ['cs.SD', 'cs.CL', 'eess.AS'] | We introduce AudioBench, a universal benchmark designed to evaluate Audio
Large Language Models (AudioLLMs). It encompasses 8 distinct tasks and 26
datasets, among which, 7 are newly proposed datasets. The evaluation targets
three main aspects: speech understanding, audio scene understanding, and voice
understanding (p... | 2024-06-23T05:40:26Z | v5 - Update acknowledgment; Code:
https://github.com/AudioLLMs/AudioBench | null | null | AudioBench: A Universal Benchmark for Audio Large Language Models | ['Bin Wang', 'Xunlong Zou', 'Geyu Lin', 'Shuo Sun', 'Zhuohan Liu', 'Wenyu Zhang', 'Zhengyuan Liu', 'AiTi Aw', 'Nancy F. Chen'] | 2024 | North American Chapter of the Association for Computational Linguistics | 35 | 78 | ['Computer Science', 'Engineering'] |
2406.16148 | Towards Open Respiratory Acoustic Foundation Models: Pretraining and
Benchmarking | ['Yuwei Zhang', 'Tong Xia', 'Jing Han', 'Yu Wu', 'Georgios Rizos', 'Yang Liu', 'Mohammed Mosuily', 'Jagmohan Chauhan', 'Cecilia Mascolo'] | ['cs.SD', 'cs.AI', 'cs.LG', 'eess.AS'] | Respiratory audio, such as coughing and breathing sounds, has predictive
power for a wide range of healthcare applications, yet is currently
under-explored. The main problem for those applications arises from the
difficulty in collecting large labeled task-specific data for model
development. Generalizable respiratory ... | 2024-06-23T16:04:26Z | accepted by NeurIPS 2024 Track Datasets and Benchmarks | null | null | Towards Open Respiratory Acoustic Foundation Models: Pretraining and Benchmarking | ['Yuwei Zhang', 'Tong Xia', 'Jing Han', 'Y. Wu', 'Georgios Rizos', 'Yang Liu', 'Mohammed Mosuily', 'Jagmohan Chauhan', 'Cecilia Mascolo'] | 2024 | Neural Information Processing Systems | 12 | 76 | ['Computer Science', 'Engineering'] |
2406.16192 | HEST-1k: A Dataset for Spatial Transcriptomics and Histology Image
Analysis | ['Guillaume Jaume', 'Paul Doucet', 'Andrew H. Song', 'Ming Y. Lu', 'Cristina Almagro-Pérez', 'Sophia J. Wagner', 'Anurag J. Vaidya', 'Richard J. Chen', 'Drew F. K. Williamson', 'Ahrong Kim', 'Faisal Mahmood'] | ['cs.CV'] | Spatial transcriptomics enables interrogating the molecular composition of
tissue with ever-increasing resolution and sensitivity. However, costs, rapidly
evolving technology, and lack of standards have constrained computational
methods in ST to narrow tasks and small cohorts. In addition, the underlying
tissue morphol... | 2024-06-23T19:04:13Z | NeurIPS'24 Spotlight | null | null | HEST-1k: A Dataset for Spatial Transcriptomics and Histology Image Analysis | ['Guillaume Jaume', 'Paul Doucet', 'Andrew H. Song', 'Ming Y. Lu', "Cristina Almagro-P'erez", 'Sophia J. Wagner', 'Anurag Vaidya', 'Richard J. Chen', 'Drew F. K. Williamson', 'Ahrong Kim', 'Faisal Mahmood'] | 2024 | Neural Information Processing Systems | 35 | 174 | ['Computer Science'] |
2406.16223 | Continuous Output Personality Detection Models via Mixed Strategy
Training | ['Rong Wang', 'Kun Sun'] | ['cs.CL', 'cs.AI'] | The traditional personality models only yield binary results. This paper
presents a novel approach for training personality detection models that
produce continuous output values, using mixed strategies. By leveraging the
PANDORA dataset, which includes extensive personality labeling of Reddit
comments, we developed mo... | 2024-06-23T21:32:15Z | null | null | null | Continuous Output Personality Detection Models via Mixed Strategy Training | ['Rong Wang', 'Kun Sun'] | 2024 | arXiv.org | 2 | 20 | ['Computer Science'] |
2406.16235 | Preference Tuning For Toxicity Mitigation Generalizes Across Languages | ['Xiaochen Li', 'Zheng-Xin Yong', 'Stephen H. Bach'] | ['cs.CL', 'cs.AI', 'cs.CR', 'cs.LG'] | Detoxifying multilingual Large Language Models (LLMs) has become crucial due
to their increasing global use. In this work, we explore zero-shot
cross-lingual generalization of preference tuning in detoxifying LLMs. Unlike
previous studies that show limited cross-lingual generalization for other
safety tasks, we demonst... | 2024-06-23T22:53:47Z | Findings of EMNLP 2024 | null | null | null | null | null | null | null | null | null |
2406.16314 | DreamVoice: Text-Guided Voice Conversion | ['Jiarui Hai', 'Karan Thakkar', 'Helin Wang', 'Zengyi Qin', 'Mounya Elhilali'] | ['eess.AS'] | Generative voice technologies are rapidly evolving, offering opportunities
for more personalized and inclusive experiences. Traditional one-shot voice
conversion (VC) requires a target recording during inference, limiting ease of
usage in generating desired voice timbres. Text-guided generation offers an
intuitive solu... | 2024-06-24T04:46:50Z | Accepted at INTERSPEECH 2024 | null | null | null | null | null | null | null | null | null |
2406.16554 | LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual
Pre-training | ['Tong Zhu', 'Xiaoye Qu', 'Daize Dong', 'Jiacheng Ruan', 'Jingqi Tong', 'Conghui He', 'Yu Cheng'] | ['cs.CL'] | Mixture-of-Experts (MoE) has gained increasing popularity as a promising
framework for scaling up large language models (LLMs). However, training MoE
from scratch in a large-scale setting still suffers from data-hungry and
instability problems. Motivated by this limit, we investigate building MoE
models from existing d... | 2024-06-24T11:43:07Z | null | null | null | null | null | null | null | null | null | null |
2406.16620 | OmAgent: A Multi-modal Agent Framework for Complex Video Understanding
with Task Divide-and-Conquer | ['Lu Zhang', 'Tiancheng Zhao', 'Heting Ying', 'Yibo Ma', 'Kyusong Lee'] | ['cs.CV', 'cs.CL'] | Recent advancements in Large Language Models (LLMs) have expanded their
capabilities to multimodal contexts, including comprehensive video
understanding. However, processing extensive videos such as 24-hour CCTV
footage or full-length films presents significant challenges due to the vast
data and processing demands. Tr... | 2024-06-24T13:05:39Z | null | null | null | null | null | null | null | null | null | null |
2406.16678 | Segment Any Text: A Universal Approach for Robust, Efficient and
Adaptable Sentence Segmentation | ['Markus Frohmann', 'Igor Sterner', 'Ivan Vulić', 'Benjamin Minixhofer', 'Markus Schedl'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Segmenting text into sentences plays an early and crucial role in many NLP
systems. This is commonly achieved by using rule-based or statistical methods
relying on lexical features such as punctuation. Although some recent works no
longer exclusively rely on punctuation, we find that no prior method achieves
all of (i)... | 2024-06-24T14:36:11Z | Accepted to EMNLP 2024 Main | null | null | Segment Any Text: A Universal Approach for Robust, Efficient and Adaptable Sentence Segmentation | ['Markus Frohmann', 'Igor Sterner', "Ivan Vuli'c", 'Benjamin Minixhofer', 'Markus Schedl'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 20 | 73 | ['Computer Science'] |
2406.16793 | Adam-mini: Use Fewer Learning Rates To Gain More | ['Yushun Zhang', 'Congliang Chen', 'Ziniu Li', 'Tian Ding', 'Chenwei Wu', 'Diederik P. Kingma', 'Yinyu Ye', 'Zhi-Quan Luo', 'Ruoyu Sun'] | ['cs.LG', 'cs.AI'] | We propose Adam-mini, an optimizer that achieves on par or better performance
than AdamW with 50% less memory footprint. Adam-mini reduces memory by cutting
down the learning rate resources in Adam (i.e., $1/\sqrt{v}$). By investigating
the Hessian structure of neural nets, we find Adam's $v$ might not function at
its ... | 2024-06-24T16:56:41Z | null | null | null | null | null | null | null | null | null | null |
2406.16852 | Long Context Transfer from Language to Vision | ['Peiyuan Zhang', 'Kaichen Zhang', 'Bo Li', 'Guangtao Zeng', 'Jingkang Yang', 'Yuanhan Zhang', 'Ziyue Wang', 'Haoran Tan', 'Chunyuan Li', 'Ziwei Liu'] | ['cs.CV'] | Video sequences offer valuable temporal information, but existing large
multimodal models (LMMs) fall short in understanding extremely long videos.
Many works address this by reducing the number of visual tokens using visual
resamplers. Alternatively, in this paper, we approach this problem from the
perspective of the ... | 2024-06-24T17:58:06Z | Code, demo, and models are available at
https://github.com/EvolvingLMMs-Lab/LongVA | null | null | Long Context Transfer from Language to Vision | ['Peiyuan Zhang', 'Kaichen Zhang', 'Bo Li', 'Guangtao Zeng', 'Jingkang Yang', 'Yuanhan Zhang', 'Ziyue Wang', 'Haoran Tan', 'Chunyuan Li', 'Ziwei Liu'] | 2024 | arXiv.org | 189 | 61 | ['Computer Science'] |
2406.16858 | EAGLE-2: Faster Inference of Language Models with Dynamic Draft Trees | ['Yuhui Li', 'Fangyun Wei', 'Chao Zhang', 'Hongyang Zhang'] | ['cs.CL', 'cs.LG'] | Inference with modern Large Language Models (LLMs) is expensive and
time-consuming, and speculative sampling has proven to be an effective
solution. Most speculative sampling methods such as EAGLE use a static draft
tree, implicitly assuming that the acceptance rate of draft tokens depends only
on their position. Inter... | 2024-06-24T17:59:11Z | null | null | null | null | null | null | null | null | null | null |
2406.16860 | Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs | ['Shengbang Tong', 'Ellis Brown', 'Penghao Wu', 'Sanghyun Woo', 'Manoj Middepogu', 'Sai Charitha Akula', 'Jihan Yang', 'Shusheng Yang', 'Adithya Iyer', 'Xichen Pan', 'Ziteng Wang', 'Rob Fergus', 'Yann LeCun', 'Saining Xie'] | ['cs.CV'] | We introduce Cambrian-1, a family of multimodal LLMs (MLLMs) designed with a
vision-centric approach. While stronger language models can enhance multimodal
capabilities, the design choices for vision components are often insufficiently
explored and disconnected from visual representation learning research. This
gap hin... | 2024-06-24T17:59:42Z | NeurIPS 2024 (Oral). Website at https://cambrian-mllm.github.io | null | null | null | null | null | null | null | null | null |
2406.17100 | FaceScore: Benchmarking and Enhancing Face Quality in Human Generation | ['Zhenyi Liao', 'Qingsong Xie', 'Chen Chen', 'Hannan Lu', 'Zhijie Deng'] | ['cs.CV'] | Diffusion models (DMs) have achieved significant success in generating
imaginative images given textual descriptions. However, they are likely to fall
short when it comes to real-life scenarios with intricate details. The
low-quality, unrealistic human faces in text-to-image generation are one of the
most prominent iss... | 2024-06-24T19:39:59Z | Under review | null | null | null | null | null | null | null | null | null |
2406.17233 | Self-Constructed Context Decompilation with Fined-grained Alignment
Enhancement | ['Yunlong Feng', 'Dechuan Teng', 'Yang Xu', 'Honglin Mu', 'Xiao Xu', 'Libo Qin', 'Qingfu Zhu', 'Wanxiang Che'] | ['cs.SE', 'cs.CL'] | Decompilation transforms compiled code back into a high-level programming
language for analysis when source code is unavailable. Previous work has
primarily focused on enhancing decompilation performance by increasing the
scale of model parameters or training data for pre-training. Based on the
characteristics of the d... | 2024-06-25T02:37:53Z | EMNLP 2024 Findings | null | null | Self-Constructed Context Decompilation with Fined-grained Alignment Enhancement | ['ylfeng', 'Yang Xu', 'Dechuan Teng', 'Honglin Mu', 'Xiao Xu', 'Libo Qin', 'Wanxiang Che', 'Qingfu Zhu'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 4 | 27 | ['Computer Science'] |
2406.17294 | Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large
Language Models | ['Wenhao Shi', 'Zhiqiang Hu', 'Yi Bin', 'Junhua Liu', 'Yang Yang', 'See-Kiong Ng', 'Lidong Bing', 'Roy Ka-Wei Lee'] | ['cs.CL'] | Large language models (LLMs) have demonstrated impressive reasoning
capabilities, particularly in textual mathematical problem-solving. However,
existing open-source image instruction fine-tuning datasets, containing limited
question-answer pairs per image, do not fully exploit visual information to
enhance the multimo... | 2024-06-25T05:43:21Z | Accepted at Findings of EMNLP2024 | null | null | null | null | null | null | null | null | null |
2406.17295 | Less can be more for predicting properties with large language models | ['Nawaf Alampara', 'Santiago Miret', 'Kevin Maik Jablonka'] | ['cond-mat.mtrl-sci', 'cs.LG'] | Predicting properties from coordinate-category data -- sets of vectors paired
with categorical information -- is fundamental to computational science. In
materials science, this challenge manifests as predicting properties like
formation energies or elastic moduli from crystal structures comprising atomic
positions (ve... | 2024-06-25T05:45:07Z | null | null | null | null | null | null | null | null | null | null |
2406.17305 | Retrieval Augmented Instruction Tuning for Open NER with Large Language
Models | ['Tingyu Xie', 'Jian Zhang', 'Yan Zhang', 'Yuanyuan Liang', 'Qi Li', 'Hongwei Wang'] | ['cs.CL'] | The strong capability of large language models (LLMs) has been applied to
information extraction (IE) through either retrieval augmented prompting or
instruction tuning (IT). However, the best way to incorporate information with
LLMs for IE remains an open question. In this paper, we explore Retrieval
Augmented Instruc... | 2024-06-25T06:24:50Z | To be appeared at COLING 2025 | null | null | null | null | null | null | null | null | null |
2406.17345 | NerfBaselines: Consistent and Reproducible Evaluation of Novel View
Synthesis Methods | ['Jonas Kulhanek', 'Torsten Sattler'] | ['cs.CV'] | Novel view synthesis is an important problem with many applications,
including AR/VR, gaming, and simulations for robotics. With the recent rapid
development of Neural Radiance Fields (NeRFs) and 3D Gaussian Splatting (3DGS)
methods, it is becoming difficult to keep track of the current state of the art
(SoTA) due to m... | 2024-06-25T07:58:47Z | Web: https://jkulhanek.com/nerfbaselines | null | null | null | null | null | null | null | null | null |
2406.17404 | Make Some Noise: Unlocking Language Model Parallel Inference Capability
through Noisy Training | ['Yixuan Wang', 'Xianzhen Luo', 'Fuxuan Wei', 'Yijun Liu', 'Qingfu Zhu', 'Xuanyu Zhang', 'Qing Yang', 'Dongliang Xu', 'Wanxiang Che'] | ['cs.CL', 'cs.LG'] | Existing speculative decoding methods typically require additional model
structure and training processes to assist the model for draft token
generation. This makes the migration of acceleration methods to the new model
more costly and more demanding on device memory. To address this problem, we
propose the Make Some N... | 2024-06-25T09:25:39Z | EMNLP 2024, camera ready | null | null | null | null | null | null | null | null | null |
2406.17415 | Layer-Wise Quantization: A Pragmatic and Effective Method for Quantizing
LLMs Beyond Integer Bit-Levels | ['Razvan-Gabriel Dumitru', 'Vikas Yadav', 'Rishabh Maheshwary', 'Paul-Ioan Clotan', 'Sathwik Tejaswi Madhusudhan', 'Mihai Surdeanu'] | ['cs.CL', 'cs.AI', 'cs.LG', 'I.2.7; I.2.0'] | We present a simple meta quantization approach that quantizes different
layers of a large language model (LLM) at different bit levels, and is
independent of the underlying quantization technique. Specifically, we quantize
the most important layers to higher bit precision and less important layers to
lower bits. We pro... | 2024-06-25T09:37:15Z | null | null | null | null | null | null | null | null | null | null |