| arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
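A hazard when loading this table: new-style arXiv IDs (YYMM.NNNNN) look numeric but are not — a float64 cast adds thousands separators and drops trailing zeros (2404.12390 becomes 2,404.1239), and does the same to ss_year. Below is a minimal standard-library sketch that restores both columns; the two-row CSV string is a hypothetical export for illustration, not one of the dataset files.

```python
import csv
from io import StringIO

def normalize_arxiv_id(raw: str) -> str:
    """Undo float damage on a new-style arXiv ID: strip thousands
    separators, then zero-pad the sequence part back to 5 digits
    (IDs issued since 2015 are YYMM.NNNNN)."""
    s = raw.replace(",", "").strip()
    prefix, _, seq = s.partition(".")
    return f"{prefix}.{seq.ljust(5, '0')}"

# Hypothetical two-row export exhibiting the same damage as the raw table.
raw = 'arxiv_id,ss_year\n"2,404.1239","2,024"\n"2,404.12141","2,024"\n'
rows = list(csv.DictReader(StringIO(raw)))
for row in rows:
    row["arxiv_id"] = normalize_arxiv_id(row["arxiv_id"])   # keep as string
    row["ss_year"] = int(row["ss_year"].replace(",", ""))   # plain integer

print([r["arxiv_id"] for r in rows])  # ['2404.12390', '2404.12141']
```

Keeping `arxiv_id` as a string (never a float) avoids reintroducing the problem on the next serialization round-trip.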
2404.12141 | MolCRAFT: Structure-Based Drug Design in Continuous Parameter Space | ['Yanru Qu', 'Keyue Qiu', 'Yuxuan Song', 'Jingjing Gong', 'Jiawei Han', 'Mingyue Zheng', 'Hao Zhou', 'Wei-Ying Ma'] | ['q-bio.BM', 'cs.LG'] | Generative models for structure-based drug design (SBDD) have shown promising
results in recent years. Existing works mainly focus on how to generate
molecules with higher binding affinity, ignoring the feasibility prerequisites
for generated 3D poses and resulting in false positives. We conduct thorough
studies on key... | 2024-04-18T12:43:39Z | Accepted to ICML 2024 | null | null | MolCRAFT: Structure-Based Drug Design in Continuous Parameter Space | ['Yanru Qu', 'Keyue Qiu', 'Yuxuan Song', 'Jingjing Gong', 'Jiawei Han', 'Mingyue Zheng', 'Hao Zhou', 'Wei-Ying Ma'] | 2024 | International Conference on Machine Learning | 20 | 37 | ['Biology', 'Computer Science'] |
2404.12195 | OpenBezoar: Small, Cost-Effective and Open Models Trained on Mixes of
Instruction Data | ['Chandeepa Dissanayake', 'Lahiru Lowe', 'Sachith Gunasekara', 'Yasiru Ratnayake'] | ['cs.CL', 'cs.LG'] | Instruction fine-tuning pretrained LLMs for diverse downstream tasks has
demonstrated remarkable success and has captured the interest of both academics
and practitioners. To ensure such fine-tuned LLMs align with human preferences,
techniques such as RLHF and DPO have emerged. At the same time, there is
increasing int... | 2024-04-18T13:57:18Z | 25 pages, 27 Figures, 8 Tables | null | null | OpenBezoar: Small, Cost-Effective and Open Models Trained on Mixes of Instruction Data | ['Chandeepa Dissanayake', 'Lahiru Lowe', 'Sachith Gunasekara', 'Yasiru Ratnayake'] | 2024 | arXiv.org | 2 | 38 | ['Computer Science'] |
2404.12224 | Length Generalization of Causal Transformers without Position Encoding | ['Jie Wang', 'Tao Ji', 'Yuanbin Wu', 'Hang Yan', 'Tao Gui', 'Qi Zhang', 'Xuanjing Huang', 'Xiaoling Wang'] | ['cs.CL'] | Generalizing to longer sentences is important for recent Transformer-based
language models. Besides algorithms manipulating explicit position features,
the success of Transformers without position encodings (NoPE) provides a new
way to overcome the challenge. In this paper, we study the length
generalization property o... | 2024-04-18T14:38:32Z | null | null | null | Length Generalization of Causal Transformers without Position Encoding | ['Jie Wang', 'Tao Ji', 'Yuanbin Wu', 'Hang Yan', 'Tao Gui', 'Qi Zhang', 'Xuanjing Huang', 'Xiaoling Wang'] | 2024 | Annual Meeting of the Association for Computational Linguistics | 23 | 38 | ['Computer Science'] |
2404.12241 | Introducing v0.5 of the AI Safety Benchmark from MLCommons | ['Bertie Vidgen', 'Adarsh Agrawal', 'Ahmed M. Ahmed', 'Victor Akinwande', 'Namir Al-Nuaimi', 'Najla Alfaraj', 'Elie Alhajjar', 'Lora Aroyo', 'Trupti Bavalatti', 'Max Bartolo', 'Borhane Blili-Hamelin', 'Kurt Bollacker', 'Rishi Bomassani', 'Marisa Ferrara Boston', 'Siméon Campos', 'Kal Chakra', 'Canyu Chen', 'Cody Colema... | ['cs.CL', 'cs.AI'] | This paper introduces v0.5 of the AI Safety Benchmark, which has been created
by the MLCommons AI Safety Working Group. The AI Safety Benchmark has been
designed to assess the safety risks of AI systems that use chat-tuned language
models. We introduce a principled approach to specifying and constructing the
benchmark,... | 2024-04-18T15:01:00Z | null | null | null | null | null | null | null | null | null | null |
2404.12342 | Large Language Models in Targeted Sentiment Analysis | ['Nicolay Rusnachenko', 'Anton Golubev', 'Natalia Loukachevitch'] | ['cs.CL'] | In this paper we investigate the use of decoder-based generative transformers
for extracting sentiment towards the named entities in Russian news articles.
We study sentiment analysis capabilities of instruction-tuned large language
models (LLMs). We consider the dataset of RuSentNE-2023 in our study. The first
group o... | 2024-04-18T17:16:16Z | Fine-tuned Flan-T5-xl outperforms the top #1 results of
transformer-based classifier in RuSentNE-2023 competition, to appear in
Lobachevskii Journal of Mathematics No.8/2024 proceedings | null | null | null | null | null | null | null | null | null |
2404.12390 | BLINK: Multimodal Large Language Models Can See but Not Perceive | ['Xingyu Fu', 'Yushi Hu', 'Bangzheng Li', 'Yu Feng', 'Haoyu Wang', 'Xudong Lin', 'Dan Roth', 'Noah A. Smith', 'Wei-Chiu Ma', 'Ranjay Krishna'] | ['cs.CV', 'cs.AI', 'cs.CL'] | We introduce Blink, a new benchmark for multimodal language models (LLMs)
that focuses on core visual perception abilities not found in other
evaluations. Most of the Blink tasks can be solved by humans "within a blink"
(e.g., relative depth estimation, visual correspondence, forensics detection,
and multi-view reasoni... | 2024-04-18T17:59:54Z | Multimodal Benchmark, Project Url: https://zeyofu.github.io/blink/,
ECCV 2024 | null | null | null | null | null | null | null | null | null |
2404.12500 | UIClip: A Data-driven Model for Assessing User Interface Design | ['Jason Wu', 'Yi-Hao Peng', 'Amanda Li', 'Amanda Swearngin', 'Jeffrey P. Bigham', 'Jeffrey Nichols'] | ['cs.HC', 'cs.CL', 'cs.CV'] | User interface (UI) design is a difficult yet important task for ensuring the
usability, accessibility, and aesthetic qualities of applications. In our
paper, we develop a machine-learned model, UIClip, for assessing the design
quality and visual relevance of a UI given its screenshot and natural language
description. ... | 2024-04-18T20:43:08Z | null | null | null | null | null | null | null | null | null | null |
2404.12501 | SPIdepth: Strengthened Pose Information for Self-supervised Monocular
Depth Estimation | ['Mykola Lavreniuk'] | ['cs.CV', 'eess.IV'] | Self-supervised monocular depth estimation has garnered considerable
attention for its applications in autonomous driving and robotics. While recent
methods have made strides in leveraging techniques like the Self Query Layer
(SQL) to infer depth from motion, they often overlook the potential of
strengthening pose info... | 2024-04-18T20:43:33Z | null | null | null | null | null | null | null | null | null | null |
2404.12636 | MORepair: Teaching LLMs to Repair Code via Multi-Objective Fine-tuning | ['Boyang Yang', 'Haoye Tian', 'Jiadong Ren', 'Hongyu Zhang', 'Jacques Klein', 'Tegawendé F. Bissyandé', 'Claire Le Goues', 'Shunfu Jin'] | ['cs.SE'] | Within the realm of software engineering, specialized tasks on code, such as
program repair, present unique challenges, necessitating fine-tuning Large
language models~(LLMs) to unlock state-of-the-art performance. Fine-tuning
approaches proposed in the literature for LLMs on program repair tasks
generally overlook the... | 2024-04-19T05:36:21Z | null | null | null | null | null | null | null | null | null | null |
2404.13028 | When Life gives you LLMs, make LLM-ADE: Large Language Models with
Adaptive Data Engineering | ['Stephen Choi', 'William Gazeley'] | ['cs.CE', 'cs.AI'] | This paper presents the LLM-ADE framework, a novel methodology for continued
pre-training of large language models (LLMs) that addresses the challenges of
catastrophic forgetting and double descent. LLM-ADE employs dynamic
architectural adjustments, including selective block freezing and expansion,
tailored to specific... | 2024-04-19T17:43:26Z | 6 pages, 3 tables and 3 figures | null | null | null | null | null | null | null | null | null |
2404.13046 | MoVA: Adapting Mixture of Vision Experts to Multimodal Context | ['Zhuofan Zong', 'Bingqi Ma', 'Dazhong Shen', 'Guanglu Song', 'Hao Shao', 'Dongzhi Jiang', 'Hongsheng Li', 'Yu Liu'] | ['cs.CV'] | As the key component in multimodal large language models (MLLMs), the ability
of the visual encoder greatly affects MLLM's understanding on diverse image
content. Although some large-scale pretrained vision encoders such as vision
encoders in CLIP and DINOv2 have brought promising performance, we found that
there is st... | 2024-04-19T17:59:48Z | NeurIPS 2024 | null | null | null | null | null | null | null | null | null |
2404.13364 | MahaSQuAD: Bridging Linguistic Divides in Marathi Question-Answering | ['Ruturaj Ghatage', 'Aditya Kulkarni', 'Rajlaxmi Patil', 'Sharvi Endait', 'Raviraj Joshi'] | ['cs.CL', 'cs.LG'] | Question-answering systems have revolutionized information retrieval, but
linguistic and cultural boundaries limit their widespread accessibility. This
research endeavors to bridge the gap of the absence of efficient QnA datasets
in low-resource languages by translating the English Question Answering Dataset
(SQuAD) us... | 2024-04-20T12:16:35Z | Accepted at the International Conference on Natural Language
Processing (ICON 2023) | null | null | null | null | null | null | null | null | null |
2404.13397 | Retrieval-Augmented Generation-based Relation Extraction | ['Sefika Efeoglu', 'Adrian Paschke'] | ['cs.CL', 'cs.AI'] | Information Extraction (IE) is a transformative process that converts
unstructured text data into a structured format by employing entity and
relation extraction (RE) methodologies. The identification of the relation
between a pair of entities plays a crucial role within this framework. Despite
the existence of various... | 2024-04-20T14:42:43Z | Submitted to Semantic Web Journal. Under Review | null | null | Retrieval-Augmented Generation-based Relation Extraction | ['Sefika Efeoglu', 'Adrian Paschke'] | 2024 | arXiv.org | 9 | 32 | ['Computer Science'] |
2404.13686 | Hyper-SD: Trajectory Segmented Consistency Model for Efficient Image
Synthesis | ['Yuxi Ren', 'Xin Xia', 'Yanzuo Lu', 'Jiacheng Zhang', 'Jie Wu', 'Pan Xie', 'Xing Wang', 'Xuefeng Xiao'] | ['cs.CV'] | Recently, a series of diffusion-aware distillation algorithms have emerged to
alleviate the computational overhead associated with the multi-step inference
process of Diffusion Models (DMs). Current distillation techniques often
dichotomize into two distinct aspects: i) ODE Trajectory Preservation; and ii)
ODE Trajecto... | 2024-04-21T15:16:05Z | Accepted by NeurIPS 2024 (Camera-Ready Version). Project Page:
https://hyper-sd.github.io/ | null | null | null | null | null | null | null | null | null |
2404.13903 | Accelerating Image Generation with Sub-path Linear Approximation Model | ['Chen Xu', 'Tianhui Song', 'Weixin Feng', 'Xubin Li', 'Tiezheng Ge', 'Bo Zheng', 'Limin Wang'] | ['cs.CV'] | Diffusion models have significantly advanced the state of the art in image,
audio, and video generation tasks. However, their applications in practical
scenarios are hindered by slow inference speed. Drawing inspiration from the
approximation strategies utilized in consistency models, we propose the
Sub-path Linear App... | 2024-04-22T06:25:17Z | null | null | null | null | null | null | null | null | null | null |
2404.14047 | An empirical study of LLaMA3 quantization: from LLMs to MLLMs | ['Wei Huang', 'Xingyu Zheng', 'Xudong Ma', 'Haotong Qin', 'Chengtao Lv', 'Hong Chen', 'Jie Luo', 'Xiaojuan Qi', 'Xianglong Liu', 'Michele Magno'] | ['cs.LG'] | The LLaMA family, a collection of foundation language models ranging from 7B
to 65B parameters, has become one of the most powerful open-source large
language models (LLMs) and the popular LLM backbone of multi-modal large
language models (MLLMs), widely used in computer vision and natural language
understanding tasks.... | 2024-04-22T10:03:03Z | null | null | 10.1007/s44267-024-00070-x | An empirical study of LLaMA3 quantization: from LLMs to MLLMs | ['Wei Huang', 'Xudong Ma', 'Haotong Qin', 'Xingyu Zheng', 'Chengtao Lv', 'Hong Chen', 'Jie Luo', 'Xiaojuan Qi', 'Xianglong Liu', 'Michele Magno'] | 2024 | Vis. Intell. | 42 | 41 | ['Computer Science', 'Medicine'] |
2404.14215 | Text-Tuple-Table: Towards Information Integration in Text-to-Table
Generation via Global Tuple Extraction | ['Zheye Deng', 'Chunkit Chan', 'Weiqi Wang', 'Yuxi Sun', 'Wei Fan', 'Tianshi Zheng', 'Yauwai Yim', 'Yangqiu Song'] | ['cs.CL'] | The task of condensing large chunks of textual information into concise and
structured tables has gained attention recently due to the emergence of Large
Language Models (LLMs) and their potential benefit for downstream tasks, such
as text summarization and text mining. Previous approaches often generate
tables that di... | 2024-04-22T14:31:28Z | Accepted to EMNLP 2024 | null | null | Text-Tuple-Table: Towards Information Integration in Text-to-Table Generation via Global Tuple Extraction | ['Zheye Deng', 'Chunkit Chan', 'Weiqi Wang', 'Yuxi Sun', 'Wei Fan', 'Tianshi ZHENG', 'Yauwai Yim', 'Yangqiu Song'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 15 | 50 | ['Computer Science'] |
2404.14219 | Phi-3 Technical Report: A Highly Capable Language Model Locally on Your
Phone | ['Marah Abdin', 'Jyoti Aneja', 'Hany Awadalla', 'Ahmed Awadallah', 'Ammar Ahmad Awan', 'Nguyen Bach', 'Amit Bahree', 'Arash Bakhtiari', 'Jianmin Bao', 'Harkirat Behl', 'Alon Benhaim', 'Misha Bilenko', 'Johan Bjorck', 'Sébastien Bubeck', 'Martin Cai', 'Qin Cai', 'Vishrav Chaudhary', 'Dong Chen', 'Dongdong Chen', 'Weizhu... | ['cs.CL', 'cs.AI'] | We introduce phi-3-mini, a 3.8 billion parameter language model trained on
3.3 trillion tokens, whose overall performance, as measured by both academic
benchmarks and internal testing, rivals that of models such as Mixtral 8x7B and
GPT-3.5 (e.g., phi-3-mini achieves 69% on MMLU and 8.38 on MT-bench), despite
being smal... | 2024-04-22T14:32:33Z | 24 pages | null | null | null | null | null | null | null | null | null |
2404.14396 | SEED-X: Multimodal Models with Unified Multi-granularity Comprehension
and Generation | ['Yuying Ge', 'Sijie Zhao', 'Jinguo Zhu', 'Yixiao Ge', 'Kun Yi', 'Lin Song', 'Chen Li', 'Xiaohan Ding', 'Ying Shan'] | ['cs.CV'] | The rapid evolution of multimodal foundation model has demonstrated
significant progresses in vision-language understanding and generation, e.g.,
our previous work SEED-LLaMA. However, there remains a gap between its
capability and the real-world applicability, primarily due to the model's
limited capacity to effective... | 2024-04-22T17:56:09Z | We added benchmark results (without updating models) and ablation
study in this version. Project released at:
https://github.com/AILab-CVC/SEED-X | null | null | null | null | null | null | null | null | null |
2404.14397 | RTP-LX: Can LLMs Evaluate Toxicity in Multilingual Scenarios? | ['Adrian de Wynter', 'Ishaan Watts', 'Tua Wongsangaroonsri', 'Minghui Zhang', 'Noura Farra', 'Nektar Ege Altıntoprak', 'Lena Baur', 'Samantha Claudet', 'Pavel Gajdusek', 'Can Gören', 'Qilong Gu', 'Anna Kaminska', 'Tomasz Kaminski', 'Ruby Kuo', 'Akiko Kyuba', 'Jongho Lee', 'Kartik Mathur', 'Petter Merok', 'Ivana Milovan... | ['cs.CL', 'cs.CY', 'cs.LG'] | Large language models (LLMs) and small language models (SLMs) are being
adopted at remarkable speed, although their safety still remains a serious
concern. With the advent of multilingual S/LLMs, the question now becomes a
matter of scale: can we expand multilingual safety evaluations of these models
with the same velo... | 2024-04-22T17:56:26Z | AAAI 2025--camera ready + extended abstract | null | 10.1609/aaai.v39i27.35011 | RTP-LX: Can LLMs Evaluate Toxicity in Multilingual Scenarios? | ['Adrian de Wynter', 'Ishaan Watts', 'Nektar Ege Altıntoprak', 'Tua Wongsangaroonsri', 'Minghui Zhang', 'Noura Farra', 'Lena Baur', 'Samantha Claudet', 'Pavel Gajdusek', 'Can Gören', 'Qilong Gu', 'Anna Kaminska', 'Tomasz Kaminski', 'Ruby Kuo', 'Akiko Kyuba', 'Jongho Lee', 'Kartik Mathur', 'Petter Merok', 'Ivana Milovan... | 2024 | AAAI Conference on Artificial Intelligence | 21 | 31 | ['Computer Science'] |
2404.14406 | Hyp-OC: Hyperbolic One Class Classification for Face Anti-Spoofing | ['Kartik Narayan', 'Vishal M. Patel'] | ['cs.CV'] | Face recognition technology has become an integral part of modern security
systems and user authentication processes. However, these systems are
vulnerable to spoofing attacks and can easily be circumvented. Most prior
research in face anti-spoofing (FAS) approaches it as a two-class
classification task where models ar... | 2024-04-22T17:59:18Z | Accepted in FG2024, Project Page -
https://kartik-3004.github.io/hyp-oc/ | null | null | null | null | null | null | null | null | null |
2404.14461 | Competition Report: Finding Universal Jailbreak Backdoors in Aligned
LLMs | ['Javier Rando', 'Francesco Croce', 'Kryštof Mitka', 'Stepan Shabalin', 'Maksym Andriushchenko', 'Nicolas Flammarion', 'Florian Tramèr'] | ['cs.CL', 'cs.AI', 'cs.CR', 'cs.LG'] | Large language models are aligned to be safe, preventing users from
generating harmful content like misinformation or instructions for illegal
activities. However, previous work has shown that the alignment process is
vulnerable to poisoning attacks. Adversaries can manipulate the safety training
data to inject backdoo... | 2024-04-22T05:08:53Z | Competition Report | null | null | Competition Report: Finding Universal Jailbreak Backdoors in Aligned LLMs | ['Javier Rando', 'Francesco Croce', 'Kryštof Mitka', 'Stepan Shabalin', 'Maksym Andriushchenko', 'Nicolas Flammarion', 'Florian Tramèr'] | 2024 | arXiv.org | 17 | 22 | ['Computer Science'] |
2404.14568 | UVMap-ID: A Controllable and Personalized UV Map Generative Model | ['Weijie Wang', 'Jichao Zhang', 'Chang Liu', 'Xia Li', 'Xingqian Xu', 'Humphrey Shi', 'Nicu Sebe', 'Bruno Lepri'] | ['cs.CV'] | Recently, diffusion models have made significant strides in synthesizing
realistic 2D human images based on provided text prompts. Building upon this,
researchers have extended 2D text-to-image diffusion models into the 3D domain
for generating human textures (UV Maps). However, some important problems about
UV Map Gen... | 2024-04-22T20:30:45Z | Accepted to ACMMM2024 | null | null | UVMap-ID: A Controllable and Personalized UV Map Generative Model | ['Weijie Wang', 'Jichao Zhang', 'Chang Liu', 'Xia Li', 'Xingqian Xu', 'Humphrey Shi', 'N. Sebe', 'Bruno Lepri'] | 2024 | ACM Multimedia | 3 | 58 | ['Computer Science'] |
2404.14619 | OpenELM: An Efficient Language Model Family with Open Training and
Inference Framework | ['Sachin Mehta', 'Mohammad Hossein Sekhavat', 'Qingqing Cao', 'Maxwell Horton', 'Yanzi Jin', 'Chenfan Sun', 'Iman Mirzadeh', 'Mahyar Najibi', 'Dmitry Belenko', 'Peter Zatloukal', 'Mohammad Rastegari'] | ['cs.CL', 'cs.AI', 'cs.LG'] | The reproducibility and transparency of large language models are crucial for
advancing open research, ensuring the trustworthiness of results, and enabling
investigations into data and model biases, as well as potential risks. To this
end, we release OpenELM, a state-of-the-art open language model. OpenELM uses a
laye... | 2024-04-22T23:12:03Z | Minor corrections | null | null | OpenELM: An Efficient Language Model Family with Open Training and Inference Framework | ['Sachin Mehta', 'M. Sekhavat', 'Qingqing Cao', 'Maxwell Horton', 'Yanzi Jin', 'Chenfan Sun', 'Iman Mirzadeh', 'Mahyar Najibi', 'Dmitry Belenko', 'Peter Zatloukal', 'Mohammad Rastegari'] | 2024 | arXiv.org | 62 | 54 | ['Computer Science'] |
2404.14779 | Med42 -- Evaluating Fine-Tuning Strategies for Medical LLMs:
Full-Parameter vs. Parameter-Efficient Approaches | ['Clément Christophe', 'Praveen K Kanithi', 'Prateek Munjal', 'Tathagata Raha', 'Nasir Hayat', 'Ronnie Rajan', 'Ahmed Al-Mahrooqi', 'Avani Gupta', 'Muhammad Umar Salman', 'Gurpreet Gosal', 'Bhargav Kanakiya', 'Charles Chen', 'Natalia Vassilieva', 'Boulbaba Ben Amor', 'Marco AF Pimentel', 'Shadab Khan'] | ['cs.CL'] | This study presents a comprehensive analysis and comparison of two
predominant fine-tuning methodologies - full-parameter fine-tuning and
parameter-efficient tuning - within the context of medical Large Language
Models (LLMs). We developed and refined a series of LLMs, based on the Llama-2
architecture, specifically de... | 2024-04-23T06:36:21Z | Published at AAAI 2024 Spring Symposium - Clinical Foundation Models | null | null | Med42 - Evaluating Fine-Tuning Strategies for Medical LLMs: Full-Parameter vs. Parameter-Efficient Approaches | ['Clément Christophe', 'P. Kanithi', 'Prateek Munjal', 'Tathagata Raha', 'Nasir Hayat', 'Ronnie Rajan', 'Ahmed Al-Mahrooqi', 'Avani Gupta', 'Muhammad Umar Salman', 'Gurpreet Gosal', 'Bhargav Kanakiya', 'Charles Chen', 'N. Vassilieva', 'B. Amor', 'Marco A. F. Pimentel', 'Shadab Khan'] | 2024 | arXiv.org | 35 | 36 | ['Computer Science'] |
2404.14966 | Mamba3D: Enhancing Local Features for 3D Point Cloud Analysis via State
Space Model | ['Xu Han', 'Yuan Tang', 'Zhaoxuan Wang', 'Xianzhi Li'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Existing Transformer-based models for point cloud analysis suffer from
quadratic complexity, leading to compromised point cloud resolution and
information loss. In contrast, the newly proposed Mamba model, based on state
space models (SSM), outperforms Transformer in multiple areas with only linear
complexity. However,... | 2024-04-23T12:20:27Z | ACM MM 2024. Code and weights are available at
https://github.com/xhanxu/Mamba3D | null | null | null | null | null | null | null | null | null |
2404.15159 | MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based
Mixture of Experts | ['Dengchun Li', 'Yingzi Ma', 'Naizheng Wang', 'Zhengmao Ye', 'Zhiyuan Cheng', 'Yinghao Tang', 'Yan Zhang', 'Lei Duan', 'Jie Zuo', 'Cal Yang', 'Mingjie Tang'] | ['cs.CL', 'cs.AI'] | Fine-tuning Large Language Models (LLMs) is a common practice to adapt
pre-trained models for specific applications. While methods like LoRA have
effectively addressed GPU memory constraints during fine-tuning, their
performance often falls short, especially in multi-task scenarios. In contrast,
Mixture-of-Expert (MoE)... | 2024-04-22T02:15:52Z | 18 pages, 5 figures | null | null | null | null | null | null | null | null | null |
2404.15217 | Towards Large-Scale Training of Pathology Foundation Models | ['kaiko. ai', 'Nanne Aben', 'Edwin D. de Jong', 'Ioannis Gatopoulos', 'Nicolas Känzig', 'Mikhail Karasikov', 'Axel Lagré', 'Roman Moser', 'Joost van Doorn', 'Fei Tang'] | ['cs.CV', 'cs.LG'] | Driven by the recent advances in deep learning methods and, in particular, by
the development of modern self-supervised learning algorithms, increased
interest and efforts have been devoted to build foundation models (FMs) for
medical images. In this work, we present our scalable training pipeline for
large pathology i... | 2024-03-24T21:34:36Z | null | null | null | null | null | null | null | null | null | null |
2404.15254 | UniMERNet: A Universal Network for Real-World Mathematical Expression
Recognition | ['Bin Wang', 'Zhuangcheng Gu', 'Guang Liang', 'Chao Xu', 'Bo Zhang', 'Botian Shi', 'Conghui He'] | ['cs.CV'] | The paper introduces the UniMER dataset, marking the first study on
Mathematical Expression Recognition (MER) targeting complex real-world
scenarios. The UniMER dataset includes a large-scale training set, UniMER-1M,
which offers unprecedented scale and diversity with one million training
instances to train high-qualit... | 2024-04-23T17:39:27Z | Project Website: https://github.com/opendatalab/UniMERNet | null | null | UniMERNet: A Universal Network for Real-World Mathematical Expression Recognition | ['Bin Wang', 'Zhuangcheng Gu', 'Chaochao Xu', 'Bo Zhang', 'Botian Shi', 'Conghui He'] | 2024 | arXiv.org | 13 | 42 | ['Computer Science'] |
2404.15264 | TalkingGaussian: Structure-Persistent 3D Talking Head Synthesis via
Gaussian Splatting | ['Jiahe Li', 'Jiawei Zhang', 'Xiao Bai', 'Jin Zheng', 'Xin Ning', 'Jun Zhou', 'Lin Gu'] | ['cs.CV'] | Radiance fields have demonstrated impressive performance in synthesizing
lifelike 3D talking heads. However, due to the difficulty in fitting steep
appearance changes, the prevailing paradigm that presents facial motions by
directly modifying point appearance may lead to distortions in dynamic regions.
To tackle this c... | 2024-04-23T17:55:07Z | Accepted at ECCV 2024. Project page:
https://fictionarry.github.io/TalkingGaussian/ | null | null | null | null | null | null | null | null | null |
2404.15267 | From Parts to Whole: A Unified Reference Framework for Controllable
Human Image Generation | ['Zehuan Huang', 'Hongxing Fan', 'Lipeng Wang', 'Lu Sheng'] | ['cs.CV'] | Recent advancements in controllable human image generation have led to
zero-shot generation using structural signals (e.g., pose, depth) or facial
appearance. Yet, generating human images conditioned on multiple parts of human
appearance remains challenging. Addressing this, we introduce Parts2Whole, a
novel framework ... | 2024-04-23T17:56:08Z | null | null | null | null | null | null | null | null | null | null |
2404.15275 | ID-Animator: Zero-Shot Identity-Preserving Human Video Generation | ['Xuanhua He', 'Quande Liu', 'Shengju Qian', 'Xin Wang', 'Tao Hu', 'Ke Cao', 'Keyu Yan', 'Jie Zhang'] | ['cs.CV'] | Generating high-fidelity human video with specified identities has attracted
significant attention in the content generation community. However, existing
techniques struggle to strike a balance between training efficiency and
identity preservation, either requiring tedious case-by-case fine-tuning or
usually missing id... | 2024-04-23T17:59:43Z | Project Page: https://id-animator.github.io/ | null | null | ID-Animator: Zero-Shot Identity-Preserving Human Video Generation | ['Xuanhua He', 'Quande Liu', 'Shengju Qian', 'Xin Wang', 'Tao Hu', 'Ke Cao', 'K. Yan', 'Man Zhou', 'Jie Zhang'] | 2024 | arXiv.org | 50 | 46 | ['Computer Science'] |
2404.16022 | PuLID: Pure and Lightning ID Customization via Contrastive Alignment | ['Zinan Guo', 'Yanze Wu', 'Zhuowei Chen', 'Lang Chen', 'Peng Zhang', 'Qian He'] | ['cs.CV'] | We propose Pure and Lightning ID customization (PuLID), a novel tuning-free
ID customization method for text-to-image generation. By incorporating a
Lightning T2I branch with a standard diffusion one, PuLID introduces both
contrastive alignment loss and accurate ID loss, minimizing disruption to the
original model and ... | 2024-04-24T17:55:33Z | NeurIPS 2024. Codes and models are available at
https://github.com/ToTheBeginning/PuLID | null | null | PuLID: Pure and Lightning ID Customization via Contrastive Alignment | ['Zinan Guo', 'Yanze Wu', 'Zhuowei Chen', 'Lang Chen', 'Qian He'] | 2024 | Neural Information Processing Systems | 66 | 53 | ['Computer Science'] |
2404.16035 | MaGGIe: Masked Guided Gradual Human Instance Matting | ['Chuong Huynh', 'Seoung Wug Oh', 'Abhinav Shrivastava', 'Joon-Young Lee'] | ['cs.CV', 'cs.AI'] | Human matting is a foundation task in image and video processing, where human
foreground pixels are extracted from the input. Prior works either improve the
accuracy by additional guidance or improve the temporal consistency of a single
instance across frames. We propose a new framework MaGGIe, Masked Guided
Gradual Hu... | 2024-04-24T17:59:53Z | CVPR 2024. Project link: https://maggie-matt.github.io | null | null | MaGGIe: Masked Guided Gradual Human Instance Matting | ['Chuong Huynh', 'Seoung Wug Oh', 'Abhinav Shrivastava', 'Joon-Young Lee'] | 2024 | Computer Vision and Pattern Recognition | 8 | 57 | ['Computer Science'] |
2404.16053 | Human Latency Conversational Turns for Spoken Avatar Systems | ['Derek Jacoby', 'Tianyi Zhang', 'Aanchan Mohan', 'Yvonne Coady'] | ['cs.HC', 'cs.AI', 'cs.CL'] | A problem with many current Large Language Model (LLM) driven spoken
dialogues is the response time. Some efforts such as Groq address this issue by
lightning fast processing of the LLM, but we know from the cognitive psychology
literature that in human-to-human dialogue often responses occur prior to the
speaker compl... | 2024-04-11T20:20:48Z | null | null | null | null | null | null | null | null | null | null |
2404.16375 | List Items One by One: A New Data Source and Learning Paradigm for
Multimodal LLMs | ['An Yan', 'Zhengyuan Yang', 'Junda Wu', 'Wanrong Zhu', 'Jianwei Yang', 'Linjie Li', 'Kevin Lin', 'Jianfeng Wang', 'Julian McAuley', 'Jianfeng Gao', 'Lijuan Wang'] | ['cs.CV', 'cs.AI', 'cs.CL'] | Set-of-Mark (SoM) Prompting unleashes the visual grounding capability of
GPT-4V, by enabling the model to associate visual objects with tags inserted on
the image. These tags, marked with alphanumerics, can be indexed via text
tokens for easy reference. Despite the extraordinary performance from GPT-4V,
we observe that... | 2024-04-25T07:29:17Z | published at COLM-2024 | null | null | null | null | null | null | null | null | null |
2404.16621 | Hippocrates: An Open-Source Framework for Advancing Large Language
Models in Healthcare | ['Emre Can Acikgoz', 'Osman Batur İnce', 'Rayene Bench', 'Arda Anıl Boz', 'İlker Kesen', 'Aykut Erdem', 'Erkut Erdem'] | ['cs.LG', 'cs.AI', 'cs.CL'] | The integration of Large Language Models (LLMs) into healthcare promises to
transform medical diagnostics, research, and patient care. Yet, the progression
of medical LLMs faces obstacles such as complex training requirements, rigorous
evaluation demands, and the dominance of proprietary models that restrict
academic e... | 2024-04-25T14:06:37Z | null | null | null | Hippocrates: An Open-Source Framework for Advancing Large Language Models in Healthcare | ['Emre Can Acikgoz', 'Osman Batur İnce', 'Rayene Bench', 'Arda Anıl Boz', 'İlker Kesen', 'Aykut Erdem', 'Erkut Erdem'] | 2024 | arXiv.org | 10 | 44 | ['Computer Science'] |
2404.16645 | Tele-FLM Technical Report | ['Xiang Li', 'Yiqun Yao', 'Xin Jiang', 'Xuezhi Fang', 'Chao Wang', 'Xinzhang Liu', 'Zihan Wang', 'Yu Zhao', 'Xin Wang', 'Yuyao Huang', 'Shuangyong Song', 'Yongxiang Li', 'Zheng Zhang', 'Bo Zhao', 'Aixin Sun', 'Yequan Wang', 'Zhongjiang He', 'Zhongyuan Wang', 'Xuelong Li', 'Tiejun Huang'] | ['cs.CL', 'cs.AI'] | Large language models (LLMs) have showcased profound capabilities in language
understanding and generation, facilitating a wide array of applications.
However, there is a notable paucity of detailed, open-sourced methodologies on
efficiently scaling LLMs beyond 50 billion parameters with minimum
trial-and-error cost an... | 2024-04-25T14:34:47Z | null | null | null | Tele-FLM Technical Report | ['Xiang Li', 'Yiqun Yao', 'Xin Jiang', 'Xuezhi Fang', 'Chao Wang', 'Xinzhan Liu', 'Zihan Wang', 'Yu Zhao', 'Xin Wang', 'Yuyao Huang', 'Shuangyong Song', 'Yongxiang Li', 'Zheng Zhang', 'Bo Zhao', 'Aixin Sun', 'Yequan Wang', 'Zhongjiang He', 'Zhongyuan Wang', 'Xuelong Li', 'Tiejun Huang'] | 2024 | arXiv.org | 4 | 78 | ['Computer Science'] |
2404.16710 | LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding | ['Mostafa Elhoushi', 'Akshat Shrivastava', 'Diana Liskovich', 'Basil Hosmer', 'Bram Wasti', 'Liangzhen Lai', 'Anas Mahmoud', 'Bilge Acun', 'Saurabh Agarwal', 'Ahmed Roman', 'Ahmed A Aly', 'Beidi Chen', 'Carole-Jean Wu'] | ['cs.CL', 'cs.AI', 'cs.LG'] | We present LayerSkip, an end-to-end solution to speed-up inference of large
language models (LLMs). First, during training we apply layer dropout, with low
dropout rates for earlier layers and higher dropout rates for later layers, and
an early exit loss where all transformer layers share the same exit. Second,
during ... | 2024-04-25T16:20:23Z | ACL 2024 | null | 10.18653/v1/2024.acl-long.681 | LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding | ['Mostafa Elhoushi', 'Akshat Shrivastava', 'Diana Liskovich', 'Basil Hosmer', 'Bram Wasti', 'Liangzhen Lai', 'Anas Mahmoud', 'Bilge Acun', 'Saurabh Agarwal', 'Ahmed Roman', 'Ahmed Aly', 'Beidi Chen', 'Carole-Jean Wu'] | 2024 | Annual Meeting of the Association for Computational Linguistics | 110 | 62 | ['Computer Science'] |
2,404.16767 | REBEL: Reinforcement Learning via Regressing Relative Rewards | ['Zhaolin Gao', 'Jonathan D. Chang', 'Wenhao Zhan', 'Owen Oertell', 'Gokul Swamy', 'Kianté Brantley', 'Thorsten Joachims', 'J. Andrew Bagnell', 'Jason D. Lee', 'Wen Sun'] | ['cs.LG', 'cs.CL', 'cs.CV'] | While originally developed for continuous control problems, Proximal Policy
Optimization (PPO) has emerged as the workhorse of a variety of reinforcement
learning (RL) applications, including the fine-tuning of generative models.
Unfortunately, PPO requires multiple heuristics to enable stable convergence
(e.g. value ... | 2024-04-25T17:20:45Z | New experimental results on general chat | null | null | null | null | null | null | null | null | null |
2404.16771 | ConsistentID: Portrait Generation with Multimodal Fine-Grained Identity
Preserving | ['Jiehui Huang', 'Xiao Dong', 'Wenhui Song', 'Zheng Chong', 'Zhenchao Tang', 'Jun Zhou', 'Yuhao Cheng', 'Long Chen', 'Hanhui Li', 'Yiqiang Yan', 'Shengcai Liao', 'Xiaodan Liang'] | ['cs.CV', 'cs.AI'] | Diffusion-based technologies have made significant strides, particularly in
personalized and customized facial generation. However, existing methods face
challenges in achieving high-fidelity and detailed identity (ID) consistency,
primarily due to insufficient fine-grained control over facial areas and the
lack of a com... | 2024-04-25T17:23:43Z | Project page: https://ssugarwh.github.io/consistentid.github.io/ | null | null | null | null | null | null | null | null | null |
2404.16792 | Model Extrapolation Expedites Alignment | ['Chujie Zheng', 'Ziqi Wang', 'Heng Ji', 'Minlie Huang', 'Nanyun Peng'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Given the high computational cost of preference alignment training of large
language models (LLMs), exploring efficient methods to reduce the training
overhead remains an important and compelling research problem. Motivated by the
observation that alignment training typically involves only small parameter
changes witho... | 2024-04-25T17:39:50Z | ACL 2025 | null | null | null | null | null | null | null | null | null |
2404.16811 | Make Your LLM Fully Utilize the Context | ['Shengnan An', 'Zexiong Ma', 'Zeqi Lin', 'Nanning Zheng', 'Jian-Guang Lou'] | ['cs.CL', 'cs.AI'] | While many contemporary large language models (LLMs) can process lengthy
input, they still struggle to fully utilize information within the long
context, known as the lost-in-the-middle challenge. We hypothesize that it
stems from insufficient explicit supervision during the long-context training,
which fails to emphas... | 2024-04-25T17:55:14Z | 19 pages, 7 figures, 3 tables, 9 examples | null | null | null | null | null | null | null | null | null |
2404.16816 | IndicGenBench: A Multilingual Benchmark to Evaluate Generation
Capabilities of LLMs on Indic Languages | ['Harman Singh', 'Nitish Gupta', 'Shikhar Bharadwaj', 'Dinesh Tewari', 'Partha Talukdar'] | ['cs.CL'] | As large language models (LLMs) see increasing adoption across the globe, it
is imperative for LLMs to be representative of the linguistic diversity of the
world. India is a linguistically diverse country of 1.4 Billion people. To
facilitate research on multilingual LLM evaluation, we release IndicGenBench -
the larges... | 2024-04-25T17:57:36Z | ACL 2024 | null | null | IndicGenBench: A Multilingual Benchmark to Evaluate Generation Capabilities of LLMs on Indic Languages | ['Harman Singh', 'Nitish Gupta', 'Shikhar Bharadwaj', 'Dinesh Tewari', 'Partha Talukdar'] | 2024 | Annual Meeting of the Association for Computational Linguistics | 28 | 46 | ['Computer Science'] |
2404.16821 | How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal
Models with Open-Source Suites | ['Zhe Chen', 'Weiyun Wang', 'Hao Tian', 'Shenglong Ye', 'Zhangwei Gao', 'Erfei Cui', 'Wenwen Tong', 'Kongzhi Hu', 'Jiapeng Luo', 'Zheng Ma', 'Ji Ma', 'Jiaqi Wang', 'Xiaoyi Dong', 'Hang Yan', 'Hewei Guo', 'Conghui He', 'Botian Shi', 'Zhenjiang Jin', 'Chao Xu', 'Bin Wang', 'Xingjian Wei', 'Wei Li', 'Wenjian Zhang', 'Bo Z... | ['cs.CV'] | In this report, we introduce InternVL 1.5, an open-source multimodal large
language model (MLLM) to bridge the capability gap between open-source and
proprietary commercial models in multimodal understanding. We introduce three
simple improvements: (1) Strong Vision Encoder: we explored a continuous
learning strategy f... | 2024-04-25T17:59:19Z | Technical report | null | null | null | null | null | null | null | null | null |
2404.16994 | PLLaVA : Parameter-free LLaVA Extension from Images to Videos for Video
Dense Captioning | ['Lin Xu', 'Yilin Zhao', 'Daquan Zhou', 'Zhijie Lin', 'See Kiong Ng', 'Jiashi Feng'] | ['cs.CV'] | Vision-language pre-training has significantly elevated performance across a
wide range of image-language applications. Yet, the pre-training process for
video-related tasks demands exceptionally large computational and data
resources, which hinders the progress of video-language models. This paper
investigates a strai... | 2024-04-25T19:29:55Z | null | null | null | null | null | null | null | null | null | null |
2404.17140 | Small Language Models Need Strong Verifiers to Self-Correct Reasoning | ['Yunxiang Zhang', 'Muhammad Khalifa', 'Lajanugen Logeswaran', 'Jaekyeom Kim', 'Moontae Lee', 'Honglak Lee', 'Lu Wang'] | ['cs.CL'] | Self-correction has emerged as a promising solution to boost the reasoning
performance of large language models (LLMs), where LLMs refine their solutions
using self-generated critiques that pinpoint the errors. This work explores
whether small (<= 13B) language models (LMs) have the ability of
self-correction on reason... | 2024-04-26T03:41:28Z | ACL Findings 2024 - Camera Ready | null | null | null | null | null | null | null | null | null |
2404.17336 | Introducing cosmosGPT: Monolingual Training for Turkish Language Models | ['H. Toprak Kesgin', 'M. Kaan Yuce', 'Eren Dogan', 'M. Egemen Uzun', 'Atahan Uz', 'H. Emre Seyrek', 'Ahmed Zeer', 'M. Fatih Amasyali'] | ['cs.CL', 'cs.AI'] | The number of open source language models that can produce Turkish is
increasing day by day, as in other languages. In order to create the basic
versions of such models, the training of multilingual models is usually
continued with Turkish corpora. The alternative is to train the model with only
Turkish corpora. In thi... | 2024-04-26T11:34:11Z | null | null | null | null | null | null | null | null | null | null |
2404.17360 | UniRGB-IR: A Unified Framework for Visible-Infrared Semantic Tasks via
Adapter Tuning | ['Maoxun Yuan', 'Bo Cui', 'Tianyi Zhao', 'Jiayi Wang', 'Shan Fu', 'Xue Yang', 'Xingxing Wei'] | ['cs.CV'] | Semantic analysis on visible (RGB) and infrared (IR) images has gained
significant attention due to their enhanced accuracy and robustness under
challenging conditions including low-illumination and adverse weather. However,
due to the lack of pre-trained foundation models on the large-scale infrared
image datasets, ex... | 2024-04-26T12:21:57Z | null | null | null | null | null | null | null | null | null | null |
2404.17733 | Building a Large Japanese Web Corpus for Large Language Models | ['Naoaki Okazaki', 'Kakeru Hattori', 'Hirai Shota', 'Hiroki Iida', 'Masanari Ohi', 'Kazuki Fujii', 'Taishi Nakamura', 'Mengsay Loem', 'Rio Yokota', 'Sakae Mizuki'] | ['cs.CL', 'cs.AI'] | Open Japanese large language models (LLMs) have been trained on the Japanese
portions of corpora such as CC-100, mC4, and OSCAR. However, these corpora were
not created for the quality of Japanese texts. This study builds a large
Japanese web corpus by extracting and refining text from the Common Crawl
archive (21 snap... | 2024-04-27T00:02:45Z | 17 pages | null | null | Building a Large Japanese Web Corpus for Large Language Models | ['Naoaki Okazaki', 'Kakeru Hattori', 'Hirai Shota', 'Hiroki Iida', 'Masanari Ohi', 'Kazuki Fujii', 'Taishi Nakamura', 'Mengsay Loem', 'Rio Yokota', 'Sakae Mizuki'] | 2024 | arXiv.org | 7 | 48 | ['Computer Science'] |
2404.17790 | Continual Pre-Training for Cross-Lingual LLM Adaptation: Enhancing
Japanese Language Capabilities | ['Kazuki Fujii', 'Taishi Nakamura', 'Mengsay Loem', 'Hiroki Iida', 'Masanari Ohi', 'Kakeru Hattori', 'Hirai Shota', 'Sakae Mizuki', 'Rio Yokota', 'Naoaki Okazaki'] | ['cs.CL', 'cs.AI'] | Cross-lingual continual pre-training of large language models (LLMs)
initially trained on English corpus allows us to leverage the vast amount of
English language resources and reduce the pre-training cost. In this study, we
constructed Swallow, an LLM with enhanced Japanese capability, by extending the
vocabulary of L... | 2024-04-27T06:07:55Z | null | null | null | null | null | null | null | null | null | null |
2404.18212 | Paint by Inpaint: Learning to Add Image Objects by Removing Them First | ['Navve Wasserman', 'Noam Rotstein', 'Roy Ganz', 'Ron Kimmel'] | ['cs.CV', 'cs.AI'] | Image editing has advanced significantly with the introduction of
text-conditioned diffusion models. Despite this progress, seamlessly adding
objects to images based on textual instructions without requiring user-provided
input masks remains a challenge. We address this by leveraging the insight that
removing objects (... | 2024-04-28T15:07:53Z | null | null | null | Paint by Inpaint: Learning to Add Image Objects by Removing Them First | ['Navve Wasserman', 'Noam Rotstein', 'Roy Ganz', 'Ron Kimmel'] | 2024 | arXiv.org | 16 | 71 | ['Computer Science'] |
2404.18443 | BMRetriever: Tuning Large Language Models as Better Biomedical Text
Retrievers | ['Ran Xu', 'Wenqi Shi', 'Yue Yu', 'Yuchen Zhuang', 'Yanqiao Zhu', 'May D. Wang', 'Joyce C. Ho', 'Chao Zhang', 'Carl Yang'] | ['cs.CL', 'cs.AI', 'cs.IR', 'q-bio.QM'] | Developing effective biomedical retrieval models is important for excelling
at knowledge-intensive biomedical tasks but still challenging due to the
deficiency of sufficient publicly annotated biomedical data and computational
resources. We present BMRetriever, a series of dense retrievers for enhancing
biomedical retr... | 2024-04-29T05:40:08Z | Accepted to EMNLP 2024. The model and data are uploaded to
\url{https://github.com/ritaranx/BMRetriever} | EMNLP 2024 | null | BMRetriever: Tuning Large Language Models as Better Biomedical Text Retrievers | ['Ran Xu', 'Wenqi Shi', 'Yue Yu', 'Yuchen Zhuang', 'Yanqiao Zhu', 'M. D. Wang', 'Joyce C. Ho', 'Chao Zhang', 'Carl Yang'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 25 | 98 | ['Computer Science', 'Biology'] |
2404.18585 | FREB-TQA: A Fine-Grained Robustness Evaluation Benchmark for Table
Question Answering | ['Wei Zhou', 'Mohsen Mesgar', 'Heike Adel', 'Annemarie Friedrich'] | ['cs.CL'] | Table Question Answering (TQA) aims at composing an answer to a question
based on tabular data. While prior research has shown that TQA models lack
robustness, understanding the underlying cause and nature of this issue remains
predominantly unclear, posing a significant obstacle to the development of
robust TQA system... | 2024-04-29T10:55:08Z | Accepted at NAACL 2024 | null | null | FREB-TQA: A Fine-Grained Robustness Evaluation Benchmark for Table Question Answering | ['Wei Zhou', 'Mohsen Mesgar', 'Heike Adel', 'Annemarie Friedrich'] | 2024 | North American Chapter of the Association for Computational Linguistics | 9 | 29 | ['Computer Science'] |
2404.18591 | FashionSD-X: Multimodal Fashion Garment Synthesis using Latent Diffusion | ['Abhishek Kumar Singh', 'Ioannis Patras'] | ['cs.CV', 'cs.AI'] | The rapid evolution of the fashion industry increasingly intersects with
technological advancements, particularly through the integration of generative
AI. This study introduces a novel generative pipeline designed to transform the
fashion design process by employing latent diffusion models. Utilizing
ControlNet and Lo... | 2024-04-26T14:59:42Z | 9 pages, 8 figures | null | null | FashionSD-X: Multimodal Fashion Garment Synthesis using Latent Diffusion | ['Abhishek Kumar Singh', 'Ioannis Patras'] | 2024 | arXiv.org | 4 | 0 | ['Computer Science'] |
2404.18796 | Replacing Judges with Juries: Evaluating LLM Generations with a Panel of
Diverse Models | ['Pat Verga', 'Sebastian Hofstatter', 'Sophia Althammer', 'Yixuan Su', 'Aleksandra Piktus', 'Arkady Arkhangorodsky', 'Minjie Xu', 'Naomi White', 'Patrick Lewis'] | ['cs.CL', 'cs.AI'] | As Large Language Models (LLMs) have become more advanced, they have outpaced
our abilities to accurately evaluate their quality. Not only is finding data to
adequately probe particular model properties difficult, but evaluating the
correctness of a model's freeform generation alone is a challenge. To address
this, man... | 2024-04-29T15:33:23Z | null | null | null | null | null | null | null | null | null | null |
2404.18824 | Benchmarking Benchmark Leakage in Large Language Models | ['Ruijie Xu', 'Zengzhi Wang', 'Run-Ze Fan', 'Pengfei Liu'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Amid the expanding use of pre-training data, the phenomenon of benchmark
dataset leakage has become increasingly prominent, exacerbated by opaque
training processes and the often undisclosed inclusion of supervised data in
contemporary Large Language Models (LLMs). This issue skews benchmark
effectiveness and fosters p... | 2024-04-29T16:05:36Z | 30 pages; Homepage: https://gair-nlp.github.io/benbench | null | null | Benchmarking Benchmark Leakage in Large Language Models | ['Ruijie Xu', 'Zengzhi Wang', 'Run-Ze Fan', 'Pengfei Liu'] | 2024 | arXiv.org | 54 | 63 | ['Computer Science'] |
2404.18873 | OpenStreetView-5M: The Many Roads to Global Visual Geolocation | ['Guillaume Astruc', 'Nicolas Dufour', 'Ioannis Siglidis', 'Constantin Aronssohn', 'Nacim Bouia', 'Stephanie Fu', 'Romain Loiseau', 'Van Nguyen Nguyen', 'Charles Raude', 'Elliot Vincent', 'Lintao XU', 'Hongyu Zhou', 'Loic Landrieu'] | ['cs.CV', 'cs.AI'] | Determining the location of an image anywhere on Earth is a complex visual
task, which makes it particularly relevant for evaluating computer vision
algorithms. Yet, the absence of standard, large-scale, open-access datasets
with reliably localizable images has limited its potential. To address this
issue, we introduce... | 2024-04-29T17:06:44Z | CVPR 2024 | null | null | null | null | null | null | null | null | null |
2404.18896 | Overcoming Knowledge Barriers: Online Imitation Learning from Visual
Observation with Pretrained World Models | ['Xingyuan Zhang', 'Philip Becker-Ehmck', 'Patrick van der Smagt', 'Maximilian Karl'] | ['cs.LG'] | Pretraining and finetuning models has become increasingly popular in
decision-making. But there are still serious impediments in Imitation Learning
from Observation (ILfO) with pretrained models. This study identifies two
primary obstacles: the Embodiment Knowledge Barrier (EKB) and the Demonstration
Knowledge Barrier ... | 2024-04-29T17:33:52Z | Accepted at TMLR | null | null | null | null | null | null | null | null | null |
2404.19205 | TableVQA-Bench: A Visual Question Answering Benchmark on Multiple Table
Domains | ['Yoonsik Kim', 'Moonbin Yim', 'Ka Yeon Song'] | ['cs.CV', 'cs.AI'] | In this paper, we establish a benchmark for table visual question answering,
referred to as the TableVQA-Bench, derived from pre-existing table
question-answering (QA) and table structure recognition datasets. It is
important to note that existing datasets have not incorporated images or QA
pairs, which are two crucial... | 2024-04-30T02:05:18Z | Technical Report | null | null | null | null | null | null | null | null | null |
2404.19296 | Octopus v4: Graph of language models | ['Wei Chen', 'Zhiyuan Li'] | ['cs.CL'] | Language models have been effective in a wide range of applications, yet the
most sophisticated models are often proprietary. For example, GPT-4 by OpenAI
and various models by Anthropic are expensive and consume substantial energy.
In contrast, the open-source community has produced competitive models, like
Llama3. Fu... | 2024-04-30T06:55:45Z | null | null | null | null | null | null | null | null | null | null |
2404.19737 | Better & Faster Large Language Models via Multi-token Prediction | ['Fabian Gloeckle', 'Badr Youbi Idrissi', 'Baptiste Rozière', 'David Lopez-Paz', 'Gabriel Synnaeve'] | ['cs.CL'] | Large language models such as GPT and Llama are trained with a next-token
prediction loss. In this work, we suggest that training language models to
predict multiple future tokens at once results in higher sample efficiency.
More specifically, at each position in the training corpus, we ask the model to
predict the fol... | 2024-04-30T17:33:57Z | null | null | null | Better & Faster Large Language Models via Multi-token Prediction | ['Fabian Gloeckle', 'Badr Youbi Idrissi', 'Baptiste Rozière', 'David Lopez-Paz', 'Gabriele Synnaeve'] | 2024 | International Conference on Machine Learning | 121 | 54 | ['Computer Science'] |
2404.19756 | KAN: Kolmogorov-Arnold Networks | ['Ziming Liu', 'Yixuan Wang', 'Sachin Vaidya', 'Fabian Ruehle', 'James Halverson', 'Marin Soljačić', 'Thomas Y. Hou', 'Max Tegmark'] | ['cs.LG', 'cond-mat.dis-nn', 'cs.AI', 'stat.ML'] | Inspired by the Kolmogorov-Arnold representation theorem, we propose
Kolmogorov-Arnold Networks (KANs) as promising alternatives to Multi-Layer
Perceptrons (MLPs). While MLPs have fixed activation functions on nodes
("neurons"), KANs have learnable activation functions on edges ("weights").
KANs have no linear weights ... | 2024-04-30T17:58:29Z | Accepted by International Conference on Learning Representations
(ICLR) 2025 (conference version: https://openreview.net/forum?id=Ozo7qJ5vZi).
Codes are available at https://github.com/KindXiaoming/pykan | null | null | KAN: Kolmogorov-Arnold Networks | ['Ziming Liu', 'Yixuan Wang', 'Sachin Vaidya', 'Fabian Ruehle', 'James Halverson', 'Marin Soljacic', 'Thomas Y. Hou', 'Max Tegmark'] | 2024 | International Conference on Learning Representations | 602 | 151 | ['Computer Science', 'Physics', 'Mathematics'] |
2405.00134 | Transforming Dutch: Debiasing Dutch Coreference Resolution Systems for
Non-binary Pronouns | ['Goya van Boven', 'Yupei Du', 'Dong Nguyen'] | ['cs.CL', 'cs.AI', 'I.2.7'] | Gender-neutral pronouns are increasingly being introduced across Western
languages. Recent evaluations have however demonstrated that English NLP
systems are unable to correctly process gender-neutral pronouns, with the risk
of erasing and misgendering non-binary individuals. This paper examines a Dutch
coreference res... | 2024-04-30T18:31:19Z | 22 pages, 2 figures. Accepted at the 2024 ACM Conference on Fairness,
Accountability, and Transparency (FAccT '24) | null | 10.1145/3630106.3659049 | Transforming Dutch: Debiasing Dutch Coreference Resolution Systems for Non-binary Pronouns | ['Goya van Boven', 'Yupei Du', 'Dong Nguyen'] | 2024 | Conference on Fairness, Accountability and Transparency | 1 | 54 | ['Computer Science'] |
2405.00145 | GUing: A Mobile GUI Search Engine using a Vision-Language Model | ['Jialiang Wei', 'Anne-Lise Courbis', 'Thomas Lambolais', 'Binbin Xu', 'Pierre Louis Bernard', 'Gérard Dray', 'Walid Maalej'] | ['cs.SE', 'cs.CV'] | Graphical User Interfaces (GUIs) are central to app development projects. App
developers may use the GUIs of other apps as a means of requirements refinement
and rapid prototyping or as a source of inspiration for designing and improving
their own apps. Recent research has thus suggested retrieving relevant GUI
designs... | 2024-04-30T18:42:18Z | Accepted to ACM Transactions on Software Engineering and Methodology
(TOSEM) | null | 10.1145/3702993 | null | null | null | null | null | null | null |
2405.00200 | In-Context Learning with Long-Context Models: An In-Depth Exploration | ['Amanda Bertsch', 'Maor Ivgi', 'Emily Xiao', 'Uri Alon', 'Jonathan Berant', 'Matthew R. Gormley', 'Graham Neubig'] | ['cs.CL'] | As model context lengths continue to increase, the number of demonstrations
that can be provided in-context approaches the size of entire training
datasets. We study the behavior of in-context learning (ICL) at this extreme
scale on multiple datasets and models. We show that, for many datasets with
large label spaces, ... | 2024-04-30T21:06:52Z | 32 pages; NAACL 2025 camera-ready | null | null | In-Context Learning with Long-Context Models: An In-Depth Exploration | ['Amanda Bertsch', 'Maor Ivgi', 'Uri Alon', 'Jonathan Berant', 'Matthew R. Gormley', 'Graham Neubig'] | 2024 | North American Chapter of the Association for Computational Linguistics | 79 | 69 | ['Computer Science'] |
2405.00208 | A Primer on the Inner Workings of Transformer-based Language Models | ['Javier Ferrando', 'Gabriele Sarti', 'Arianna Bisazza', 'Marta R. Costa-jussà'] | ['cs.CL'] | The rapid progress of research aimed at interpreting the inner workings of
advanced language models has highlighted a need for contextualizing the
insights gained from years of work in this area. This primer provides a concise
technical introduction to the current techniques used to interpret the inner
workings of Tran... | 2024-04-30T21:20:17Z | null | null | null | A Primer on the Inner Workings of Transformer-based Language Models | ['Javier Ferrando', 'Gabriele Sarti', 'Arianna Bisazza', 'M. Costa-jussà'] | 2024 | arXiv.org | 50 | 0 | ['Computer Science'] |
2405.00332 | A Careful Examination of Large Language Model Performance on Grade
School Arithmetic | ['Hugh Zhang', 'Jeff Da', 'Dean Lee', 'Vaughn Robinson', 'Catherine Wu', 'Will Song', 'Tiffany Zhao', 'Pranav Raja', 'Charlotte Zhuang', 'Dylan Slack', 'Qin Lyu', 'Sean Hendryx', 'Russell Kaplan', 'Michele Lunati', 'Summer Yue'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Large language models (LLMs) have achieved impressive success on many
benchmarks for mathematical reasoning. However, there is growing concern that
some of this performance actually reflects dataset contamination, where data
closely resembling benchmark questions leaks into the training data, instead of
true reasoning ... | 2024-05-01T05:52:05Z | 2024 NeurIPS Camera Ready (Datasets and Benchmarks Track) | null | null | null | null | null | null | null | null | null |
2405.00675 | Self-Play Preference Optimization for Language Model Alignment | ['Yue Wu', 'Zhiqing Sun', 'Huizhuo Yuan', 'Kaixuan Ji', 'Yiming Yang', 'Quanquan Gu'] | ['cs.LG', 'cs.AI', 'cs.CL', 'stat.ML'] | Standard reinforcement learning from human feedback (RLHF) approaches relying
on parametric models like the Bradley-Terry model fall short in capturing the
intransitivity and irrationality in human preferences. Recent advancements
suggest that directly working with preference probabilities can yield a more
accurate ref... | 2024-05-01T17:59:20Z | 27 pages, 4 figures, 5 tables | null | null | Self-Play Preference Optimization for Language Model Alignment | ['Yue Wu', 'Zhiqing Sun', 'Huizhuo Yuan', 'Kaixuan Ji', 'Yiming Yang', 'Quanquan Gu'] | 2024 | International Conference on Learning Representations | 145 | 59 | ['Computer Science', 'Mathematics'] |
2405.00740 | Modeling Caption Diversity in Contrastive Vision-Language Pretraining | ['Samuel Lavoie', 'Polina Kirichenko', 'Mark Ibrahim', 'Mahmoud Assran', 'Andrew Gordon Wilson', 'Aaron Courville', 'Nicolas Ballas'] | ['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG'] | There are a thousand ways to caption an image. Contrastive Language
Pretraining (CLIP) on the other hand, works by mapping an image and its caption
to a single vector -- limiting how well CLIP-like models can represent the
diverse ways to describe an image. In this work, we introduce Llip, Latent
Language Image Pretrai... | 2024-04-30T01:19:18Z | 14 pages, 8 figures, 7 tables, to be published at ICML2024 | null | null | Modeling Caption Diversity in Contrastive Vision-Language Pretraining | ['Samuel Lavoie', 'P. Kirichenko', 'Mark Ibrahim', 'Mahmoud Assran', 'Andrew Gordon Wilson', 'Aaron Courville', 'Nicolas Ballas'] | 2024 | International Conference on Machine Learning | 23 | 63 | ['Computer Science'] |
2405.00828 | WIBA: What Is Being Argued? A Comprehensive Approach to Argument Mining | ['Arman Irani', 'Ju Yeon Park', 'Kevin Esterling', 'Michalis Faloutsos'] | ['cs.CL'] | We propose WIBA, a novel framework and suite of methods that enable the
comprehensive understanding of "What Is Being Argued" across contexts. Our
approach develops a comprehensive framework that detects: (a) the existence,
(b) the topic, and (c) the stance of an argument, correctly accounting for the
logical dependenc... | 2024-05-01T19:31:13Z | 8 pages, 2 figures, submitted to The 16th International Conference on
Advances in Social Networks Analysis and Mining (ASONAM) '24 | null | null | null | null | null | null | null | null | null |
2405.00934 | Benchmarking Representations for Speech, Music, and Acoustic Events | ['Moreno La Quatra', 'Alkis Koudounas', 'Lorenzo Vaiani', 'Elena Baralis', 'Luca Cagliero', 'Paolo Garza', 'Sabato Marco Siniscalchi'] | ['eess.AS', 'cs.LG', 'cs.SD'] | Limited diversity in standardized benchmarks for evaluating audio
representation learning (ARL) methods may hinder systematic comparison of
current methods' capabilities. We present ARCH, a comprehensive benchmark for
evaluating ARL methods on diverse audio classification domains, covering
acoustic events, music, and s... | 2024-05-02T01:24:53Z | null | null | 10.1109/ICASSPW62465.2024.10625960 | Benchmarking Representations for Speech, Music, and Acoustic Events | ['Moreno La Quatra', 'Alkis Koudounas', 'Lorenzo Vaiani', 'Elena Baralis', 'Luca Cagliero', 'Paolo Garza', 'Sabato Marco Siniscalchi'] | 2024 | 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW) | 13 | 36 | ['Computer Science', 'Engineering'] |
2405.00977 | Distillation for Multilingual Information Retrieval | ['Eugene Yang', 'Dawn Lawrie', 'James Mayfield'] | ['cs.IR', 'cs.CL'] | Recent work in cross-language information retrieval (CLIR), where queries and
documents are in different languages, has shown the benefit of the
Translate-Distill framework that trains a cross-language neural dual-encoder
model using translation and distillation. However, Translate-Distill only
supports a single docume... | 2024-05-02T03:30:03Z | 6 pages, 1 figure, accepted at SIGIR 2024 as short paper | null | 10.1145/3626772.3657955 | null | null | null | null | null | null | null |
2405.00997 | The IgboAPI Dataset: Empowering Igbo Language Technologies through
Multi-dialectal Enrichment | ['Chris Chinenye Emezue', 'Ifeoma Okoh', 'Chinedu Mbonu', 'Chiamaka Chukwuneke', 'Daisy Lal', 'Ignatius Ezeani', 'Paul Rayson', 'Ijemma Onwuzulike', 'Chukwuma Okeke', 'Gerald Nweya', 'Bright Ogbonna', 'Chukwuebuka Oraegbunam', 'Esther Chidinma Awo-Ndubuisi', 'Akudo Amarachukwu Osuagwu', 'Obioha Nmezi'] | ['cs.CL'] | The Igbo language is facing a risk of becoming endangered, as indicated by a
2025 UNESCO study. This highlights the need to develop language technologies
for Igbo to foster communication, learning and preservation. To create robust,
impactful, and widely adopted language technologies for Igbo, it is essential
to incorp... | 2024-05-02T04:27:35Z | Accepted to the LREC-COLING 2024 conference | null | null | null | null | null | null | null | null | null |
2405.01413 | MiniGPT-3D: Efficiently Aligning 3D Point Clouds with Large Language
Models using 2D Priors | ['Yuan Tang', 'Xu Han', 'Xianzhi Li', 'Qiao Yu', 'Yixue Hao', 'Long Hu', 'Min Chen'] | ['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG'] | Large 2D vision-language models (2D-LLMs) have gained significant attention
by bridging Large Language Models (LLMs) with images using a simple projector.
Inspired by their success, large 3D point cloud-language models (3D-LLMs) also
integrate point clouds into LLMs. However, directly aligning point clouds with
LLM req... | 2024-05-02T16:04:30Z | 17 pages, 9 figures | null | null | MiniGPT-3D: Efficiently Aligning 3D Point Clouds with Large Language Models using 2D Priors | ['Yuan Tang', 'Xu Han', 'Xianzhi Li', 'Qiao Yu', 'Yixue Hao', 'Long Hu', 'Min Chen'] | 2024 | ACM Multimedia | 20 | 60 | ['Computer Science'] |
2405.01470 | WildChat: 1M ChatGPT Interaction Logs in the Wild | ['Wenting Zhao', 'Xiang Ren', 'Jack Hessel', 'Claire Cardie', 'Yejin Choi', 'Yuntian Deng'] | ['cs.CL'] | Chatbots such as GPT-4 and ChatGPT are now serving millions of users. Despite
their widespread use, there remains a lack of public datasets showcasing how
these tools are used by a population of users in practice. To bridge this gap,
we offered free access to ChatGPT for online users in exchange for their
affirmative, ... | 2024-05-02T17:00:02Z | accepted by ICLR 2024 | null | null | null | null | null | null | null | null | null |
2405.01474 | Understanding Figurative Meaning through Explainable Visual Entailment | ['Arkadiy Saakyan', 'Shreyas Kulkarni', 'Tuhin Chakrabarty', 'Smaranda Muresan'] | ['cs.CL', 'cs.AI', 'cs.CV'] | Large Vision-Language Models (VLMs) have demonstrated strong capabilities in
tasks requiring a fine-grained understanding of literal meaning in images and
text, such as visual question-answering or visual entailment. However, there
has been little exploration of the capabilities of these models when presented
with imag... | 2024-05-02T17:07:25Z | NAACL 2025 Main Conference | null | null | Understanding Figurative Meaning through Explainable Visual Entailment | ['Arkadiy Saakyan', 'Shreyas Kulkarni', 'Tuhin Chakrabarty', 'S. Muresan'] | 2024 | North American Chapter of the Association for Computational Linguistics | 3 | 65 | ['Computer Science'] |
2405.01481 | NeMo-Aligner: Scalable Toolkit for Efficient Model Alignment | ['Gerald Shen', 'Zhilin Wang', 'Olivier Delalleau', 'Jiaqi Zeng', 'Yi Dong', 'Daniel Egert', 'Shengyang Sun', 'Jimmy Zhang', 'Sahil Jain', 'Ali Taghibakhshi', 'Markel Sanz Ausin', 'Ashwath Aithal', 'Oleksii Kuchaiev'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Aligning Large Language Models (LLMs) with human values and preferences is
essential for making them helpful and safe. However, building efficient tools
to perform alignment can be challenging, especially for the largest and most
competent LLMs which often contain tens or hundreds of billions of parameters.
We create N... | 2024-05-02T17:13:40Z | 16 pages, 4 figures, Accepted to COLM 2024 | null | null | NeMo-Aligner: Scalable Toolkit for Efficient Model Alignment | ['Gerald Shen', 'Zhilin Wang', 'Olivier Delalleau', 'Jiaqi Zeng', 'Yi Dong', 'Daniel Egert', 'Shengyang Sun', 'Jimmy Zhang', 'Sahil Jain', 'Ali Taghibakhshi', 'Markel Sanz Ausin', 'Ashwath Aithal', 'Oleksii Kuchaiev'] | 2024 | arXiv.org | 15 | 38 | ['Computer Science'] |
2405.01483 | MANTIS: Interleaved Multi-Image Instruction Tuning | ['Dongfu Jiang', 'Xuan He', 'Huaye Zeng', 'Cong Wei', 'Max Ku', 'Qian Liu', 'Wenhu Chen'] | ['cs.CV', 'cs.AI', 'cs.CL'] | Large multimodal models (LMMs) have shown great results in single-image
vision language tasks. However, their abilities to solve multi-image visual
language tasks is yet to be improved. The existing LMMs like OpenFlamingo,
Emu2, and Idefics gain their multi-image ability through pre-training on
hundreds of millions of ... | 2024-05-02T17:14:57Z | 13 pages, 3 figures, 13 tables | Transactions on Machine Learning Research 2024 | null | MANTIS: Interleaved Multi-Image Instruction Tuning | ['Dongfu Jiang', 'Xuan He', 'Huaye Zeng', 'Cong Wei', 'Max W.F. Ku', 'Qian Liu', 'Wenhu Chen'] | 2024 | Trans. Mach. Learn. Res. | 125 | 72 | ['Computer Science'] |
2405.01535 | Prometheus 2: An Open Source Language Model Specialized in Evaluating
Other Language Models | ['Seungone Kim', 'Juyoung Suk', 'Shayne Longpre', 'Bill Yuchen Lin', 'Jamin Shin', 'Sean Welleck', 'Graham Neubig', 'Moontae Lee', 'Kyungjae Lee', 'Minjoon Seo'] | ['cs.CL'] | Proprietary LMs such as GPT-4 are often employed to assess the quality of
responses from various LMs. However, concerns including transparency,
controllability, and affordability strongly motivate the development of
open-source LMs specialized in evaluations. On the other hand, existing open
evaluator LMs exhibit criti... | 2024-05-02T17:59:35Z | EMNLP 2024 (Main Conference) | null | null | Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models | ['Seungone Kim', 'Juyoung Suk', 'Shayne Longpre', 'Bill Yuchen Lin', 'Jamin Shin', 'S. Welleck', 'Graham Neubig', 'Moontae Lee', 'Kyungjae Lee', 'Minjoon Seo'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 205 | 45 | ['Computer Science'] |
2405.01886 | Aloe: A Family of Fine-tuned Open Healthcare LLMs | ['Ashwin Kumar Gururajan', 'Enrique Lopez-Cuena', 'Jordi Bayarri-Planas', 'Adrian Tormos', 'Daniel Hinjos', 'Pablo Bernabeu-Perez', 'Anna Arias-Duart', 'Pablo Agustin Martin-Torres', 'Lucia Urcelay-Ganzabal', 'Marta Gonzalez-Mallo', 'Sergio Alvarez-Napagao', 'Eduard Ayguadé-Parra', 'Ulises Cortés', 'Dario Garcia-Gasulla'] | ['cs.CL', 'cs.AI'] | As the capabilities of Large Language Models (LLMs) in healthcare and
medicine continue to advance, there is a growing need for competitive
open-source models that can safeguard public interest. With the increasing
availability of highly competitive open base models, the impact of continued
pre-training is increasingly... | 2024-05-03T07:14:07Z | Five appendix | null | null | null | null | null | null | null | null | null |
2405.01924 | Semi-Parametric Retrieval via Binary Bag-of-Tokens Index | ['Jiawei Zhou', 'Li Dong', 'Furu Wei', 'Lei Chen'] | ['cs.CL', 'cs.AI', 'cs.IR'] | Information retrieval has transitioned from standalone systems into essential
components across broader applications, with indexing efficiency,
cost-effectiveness, and freshness becoming increasingly critical yet often
overlooked. In this paper, we introduce SemI-parametric Disentangled Retrieval
(SiDR), a bi-encoder r... | 2024-05-03T08:34:13Z | null | null | null | null | null | null | null | null | null | null |
2405.02246 | What matters when building vision-language models? | ['Hugo Laurençon', 'Léo Tronchon', 'Matthieu Cord', 'Victor Sanh'] | ['cs.CV', 'cs.AI'] | The growing interest in vision-language models (VLMs) has been driven by
improvements in large language models and vision transformers. Despite the
abundance of literature on this subject, we observe that critical decisions
regarding the design of VLMs are often not justified. We argue that these
unsupported decisions ... | 2024-05-03T17:00:00Z | null | null | null | What matters when building vision-language models? | ['Hugo Laurençon', 'Léo Tronchon', 'Matthieu Cord', 'Victor Sanh'] | 2024 | Neural Information Processing Systems | 177 | 156 | ['Computer Science'] |
2405.02296 | Möbius Transform for Mitigating Perspective Distortions in
Representation Learning | ['Prakash Chandra Chhipa', 'Meenakshi Subhash Chippa', 'Kanjar De', 'Rajkumar Saini', 'Marcus Liwicki', 'Mubarak Shah'] | ['cs.CV'] | Perspective distortion (PD) causes unprecedented changes in shape, size,
orientation, angles, and other spatial relationships of visual concepts in
images. Precisely estimating camera intrinsic and extrinsic parameters is a
challenging task that prevents synthesizing perspective distortion.
Non-availability of dedicate... | 2024-03-07T15:39:00Z | Accepted to European Conference on Computer Vision(ECCV2024). project
page- https://prakashchhipa.github.io/projects/mpd | null | null | Möbius Transform for Mitigating Perspective Distortions in Representation Learning | ['Prakash Chandra Chhipa', 'Meenakshi Subhash Chippa', 'Kanjar De', 'Rajkumar Saini', 'Marcus Liwicki', 'Mubarak Shah'] | 2024 | European Conference on Computer Vision | 1 | 64 | ['Computer Science'] |
2405.02730 | U-DiTs: Downsample Tokens in U-Shaped Diffusion Transformers | ['Yuchuan Tian', 'Zhijun Tu', 'Hanting Chen', 'Jie Hu', 'Chao Xu', 'Yunhe Wang'] | ['cs.CV'] | Diffusion Transformers (DiTs) introduce the transformer architecture to
diffusion tasks for latent-space image generation. With an isotropic
architecture that chains a series of transformer blocks, DiTs demonstrate
competitive performance and good scalability; but meanwhile, the abandonment of
U-Net by DiTs and their f... | 2024-05-04T18:27:29Z | 12 pages, 5 figures | NeurIPS 2024 Poster | null | null | null | null | null | null | null | null |
2405.03162 | Advancing Multimodal Medical Capabilities of Gemini | ['Lin Yang', 'Shawn Xu', 'Andrew Sellergren', 'Timo Kohlberger', 'Yuchen Zhou', 'Ira Ktena', 'Atilla Kiraly', 'Faruk Ahmed', 'Farhad Hormozdiari', 'Tiam Jaroensri', 'Eric Wang', 'Ellery Wulczyn', 'Fayaz Jamil', 'Theo Guidroz', 'Chuck Lau', 'Siyuan Qiao', 'Yun Liu', 'Akshay Goel', 'Kendall Park', 'Arnav Agharwal', 'Nick... | ['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG'] | Many clinical tasks require an understanding of specialized data, such as
medical images and genomics, which is not typically found in general-purpose
large multimodal models. Building upon Gemini's multimodal models, we develop
several models within the new Med-Gemini family that inherit core capabilities
of Gemini an... | 2024-05-06T04:44:22Z | null | null | null | null | null | null | null | null | null | null |
2405.03328 | Enhancing Spatiotemporal Disease Progression Models via Latent Diffusion
and Prior Knowledge | ['Lemuel Puglisi', 'Daniel C. Alexander', 'Daniele Ravì'] | ['cs.CV', 'cs.AI'] | In this work, we introduce Brain Latent Progression (BrLP), a novel
spatiotemporal disease progression model based on latent diffusion. BrLP is
designed to predict the evolution of diseases at the individual level on 3D
brain MRIs. Existing deep generative models developed for this task are
primarily data-driven and fa... | 2024-05-06T10:07:16Z | null | null | 10.1007/978-3-031-72069-7_17 | null | null | null | null | null | null | null |
2405.03520 | Is Sora a World Simulator? A Comprehensive Survey on General World
Models and Beyond | ['Zheng Zhu', 'Xiaofeng Wang', 'Wangbo Zhao', 'Chen Min', 'Nianchen Deng', 'Min Dou', 'Yuqi Wang', 'Botian Shi', 'Kai Wang', 'Chi Zhang', 'Yang You', 'Zhaoxiang Zhang', 'Dawei Zhao', 'Liang Xiao', 'Jian Zhao', 'Jiwen Lu', 'Guan Huang'] | ['cs.CV'] | General world models represent a crucial pathway toward achieving Artificial
General Intelligence (AGI), serving as the cornerstone for various applications
ranging from virtual environments to decision-making systems. Recently, the
emergence of the Sora model has attained significant attention due to its
remarkable si... | 2024-05-06T14:37:07Z | This survey will be regularly updated at:
https://github.com/GigaAI-research/General-World-Models-Survey | null | null | null | null | null | null | null | null | null |
2405.03548 | MAmmoTH2: Scaling Instructions from the Web | ['Xiang Yue', 'Tuney Zheng', 'Ge Zhang', 'Wenhu Chen'] | ['cs.CL'] | Instruction tuning improves the reasoning abilities of large language models
(LLMs), with data quality and scalability being the crucial factors. Most
instruction tuning data come from human crowd-sourcing or GPT-4 distillation.
We propose a paradigm to efficiently harvest 10 million naturally existing
instruction data... | 2024-05-06T15:11:38Z | null | null | null | null | null | null | null | null | null | null |
2405.03553 | AlphaMath Almost Zero: Process Supervision without Process | ['Guoxin Chen', 'Minpeng Liao', 'Chengxi Li', 'Kai Fan'] | ['cs.CL', 'cs.AI'] | Although recent advancements in large language models (LLMs) have
significantly improved their performance on various tasks, they still face
challenges with complex and symbolic multi-step reasoning, particularly in
mathematical reasoning. To bolster the mathematical reasoning capabilities of
LLMs, most existing effort... | 2024-05-06T15:20:30Z | Camera ready version for NeurIPS 2024 | null | null | null | null | null | null | null | null | null |
2405.03594 | Enabling High-Sparsity Foundational Llama Models with Efficient
Pretraining and Deployment | ['Abhinav Agarwalla', 'Abhay Gupta', 'Alexandre Marques', 'Shubhra Pandit', 'Michael Goin', 'Eldar Kurtic', 'Kevin Leong', 'Tuan Nguyen', 'Mahmoud Salem', 'Dan Alistarh', 'Sean Lie', 'Mark Kurtz'] | ['cs.CL', 'cs.AI'] | Large language models (LLMs) have revolutionized Natural Language Processing
(NLP), but their size creates computational bottlenecks. We introduce a novel
approach to create accurate, sparse foundational versions of performant LLMs
that achieve full accuracy recovery for fine-tuning tasks at up to 70%
sparsity. We achi... | 2024-05-06T16:03:32Z | null | null | null | null | null | null | null | null | null | null |
2405.04299 | ViewFormer: Exploring Spatiotemporal Modeling for Multi-View 3D
Occupancy Perception via View-Guided Transformers | ['Jinke Li', 'Xiao He', 'Chonghua Zhou', 'Xiaoqiang Cheng', 'Yang Wen', 'Dan Zhang'] | ['cs.CV'] | 3D occupancy, an advanced perception technology for driving scenarios,
represents the entire scene without distinguishing between foreground and
background by quantifying the physical space into a grid map. The widely
adopted projection-first deformable attention, efficient in transforming image
features into 3D repres... | 2024-05-07T13:15:07Z | null | null | null | ViewFormer: Exploring Spatiotemporal Modeling for Multi-View 3D Occupancy Perception via View-Guided Transformers | ['Jinke Li', 'Xiao He', 'Chonghua Zhou', 'Xiaoqiang Cheng', 'Yang Wen', 'Dan Zhang'] | 2024 | European Conference on Computer Vision | 16 | 42 | ['Computer Science'] |
2405.04324 | Granite Code Models: A Family of Open Foundation Models for Code
Intelligence | ['Mayank Mishra', 'Matt Stallone', 'Gaoyuan Zhang', 'Yikang Shen', 'Aditya Prasad', 'Adriana Meza Soria', 'Michele Merler', 'Parameswaran Selvam', 'Saptha Surendran', 'Shivdeep Singh', 'Manish Sethi', 'Xuan-Hong Dang', 'Pengyuan Li', 'Kun-Lung Wu', 'Syed Zawad', 'Andrew Coleman', 'Matthew White', 'Mark Lewis', 'Raju Pa... | ['cs.AI', 'cs.CL', 'cs.SE'] | Large Language Models (LLMs) trained on code are revolutionizing the software
development process. Increasingly, code LLMs are being integrated into software
development environments to improve the productivity of human programmers, and
LLM-based agents are beginning to show promise for handling complex tasks
autonomou... | 2024-05-07T13:50:40Z | Corresponding Authors: Rameswar Panda, Ruchir Puri; Equal
Contributors: Mayank Mishra, Matt Stallone, Gaoyuan Zhang | null | null | null | null | null | null | null | null | null |
2405.04434 | DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts
Language Model | ['DeepSeek-AI', 'Aixin Liu', 'Bei Feng', 'Bin Wang', 'Bingxuan Wang', 'Bo Liu', 'Chenggang Zhao', 'Chengqi Dengr', 'Chong Ruan', 'Damai Dai', 'Daya Guo', 'Dejian Yang', 'Deli Chen', 'Dongjie Ji', 'Erhang Li', 'Fangyun Lin', 'Fuli Luo', 'Guangbo Hao', 'Guanting Chen', 'Guowei Li', 'H. Zhang', 'Hanwei Xu', 'Hao Yang', 'H... | ['cs.CL', 'cs.AI'] | We present DeepSeek-V2, a strong Mixture-of-Experts (MoE) language model
characterized by economical training and efficient inference. It comprises 236B
total parameters, of which 21B are activated for each token, and supports a
context length of 128K tokens. DeepSeek-V2 adopts innovative architectures
including Multi-... | 2024-05-07T15:56:43Z | null | null | null | null | null | null | null | null | null | null |
2405.04517 | xLSTM: Extended Long Short-Term Memory | ['Maximilian Beck', 'Korbinian Pöppel', 'Markus Spanring', 'Andreas Auer', 'Oleksandra Prudnikova', 'Michael Kopp', 'Günter Klambauer', 'Johannes Brandstetter', 'Sepp Hochreiter'] | ['cs.LG', 'cs.AI', 'stat.ML'] | In the 1990s, the constant error carousel and gating were introduced as the
central ideas of the Long Short-Term Memory (LSTM). Since then, LSTMs have
stood the test of time and contributed to numerous deep learning success
stories, in particular they constituted the first Large Language Models (LLMs).
However, the adv... | 2024-05-07T17:50:21Z | Code available at https://github.com/NX-AI/xlstm | null | null | null | null | null | null | null | null | null |
2405.04532 | QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM
Serving | ['Yujun Lin', 'Haotian Tang', 'Shang Yang', 'Zhekai Zhang', 'Guangxuan Xiao', 'Chuang Gan', 'Song Han'] | ['cs.CL', 'cs.AI', 'cs.LG', 'cs.PF'] | Quantization can accelerate large language model (LLM) inference. Going
beyond INT8 quantization, the research community is actively exploring even
lower precision, such as INT4. Nonetheless, state-of-the-art INT4 quantization
techniques only accelerate low-batch, edge LLM inference, failing to deliver
performance gain... | 2024-05-07T17:59:30Z | The first three authors contribute equally to this project and are
listed in the alphabetical order. Yujun Lin leads the quantization algorithm,
Haotian Tang and Shang Yang lead the GPU kernels and the serving system. Code
is available at https://github.com/mit-han-lab/omniserve | null | null | QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving | ['Yujun Lin', 'Haotian Tang', 'Shang Yang', 'Zhekai Zhang', 'Guangxuan Xiao', 'Chuang Gan', 'Song Han'] | 2024 | arXiv.org | 98 | 43 | ['Computer Science'] |
2405.04760 | Large Language Models for Cyber Security: A Systematic Literature Review | ['Hanxiang Xu', 'Shenao Wang', 'Ningke Li', 'Kailong Wang', 'Yanjie Zhao', 'Kai Chen', 'Ting Yu', 'Yang Liu', 'Haoyu Wang'] | ['cs.CR', 'cs.AI'] | The rapid advancement of Large Language Models (LLMs) has opened up new
opportunities for leveraging artificial intelligence in various domains,
including cybersecurity. As the volume and sophistication of cyber threats
continue to grow, there is an increasing need for intelligent systems that can
automatically detect ... | 2024-05-08T02:09:17Z | 56 pages, 6 figures | null | null | Large Language Models for Cyber Security: A Systematic Literature Review | ['Hanxiang Xu', 'Shenao Wang', 'Ningke Li', 'Kailong Wang', 'Yanjie Zhao', 'Kai Chen', 'Ting Yu', 'Yang Liu', 'Haoyu Wang'] | 2024 | arXiv.org | 43 | 230 | ['Computer Science'] |
2405.04828 | ChuXin: 1.6B Technical Report | ['Xiaomin Zhuang', 'Yufan Jiang', 'Qiaozhi He', 'Zhihua Wu'] | ['cs.CL'] | In this report, we present ChuXin, an entirely open-source language model
with a size of 1.6 billion parameters. Unlike the majority of works that only
open-sourced the model weights and architecture, we have made everything needed
to train a model available, including the training data, the training process,
and the e... | 2024-05-08T05:54:44Z | Technical Report | null | null | null | null | null | null | null | null | null |
2405.04912 | GP-MoLFormer: A Foundation Model For Molecular Generation | ['Jerret Ross', 'Brian Belgodere', 'Samuel C. Hoffman', 'Vijil Chenthamarakshan', 'Jiri Navratil', 'Youssef Mroueh', 'Payel Das'] | ['q-bio.BM', 'cs.LG', 'physics.chem-ph'] | Transformer-based models trained on large and general purpose datasets
consisting of molecular strings have recently emerged as a powerful tool for
successfully modeling various structure-property relations. Inspired by this
success, we extend the paradigm of training chemical language transformers on
large-scale chemi... | 2024-04-04T16:20:06Z | null | null | null | null | null | null | null | null | null | null |
2405.05008 | ADELIE: Aligning Large Language Models on Information Extraction | ['Yunjia Qi', 'Hao Peng', 'Xiaozhi Wang', 'Bin Xu', 'Lei Hou', 'Juanzi Li'] | ['cs.CL'] | Large language models (LLMs) usually fall short on information extraction
(IE) tasks and struggle to follow the complex instructions of IE tasks. This
primarily arises from LLMs not being aligned with humans, as mainstream
alignment datasets typically do not include IE data. In this paper, we
introduce ADELIE (Aligning... | 2024-05-08T12:24:52Z | Accepted at EMNLP 2024. Camera-ready version | null | null | null | null | null | null | null | null | null |