Dataset schema (column name, type, and value/length range as reported by the dataset viewer):

- arxiv_id — float64, range 1.5k to 2.51k
- title — string, 9 to 178 chars
- authors — string, 2 to 22.8k chars
- categories — string, 4 to 146 chars
- summary — string, 103 to 1.92k chars
- published — date string, 2015-02-06 10:44:00 to 2025-07-10 17:59:58
- comments — string, 2 to 417 chars
- journal_ref — string, 321 classes
- doi — string, 398 classes
- ss_title — string, 8 to 159 chars
- ss_authors — string, 11 to 8.38k chars
- ss_year — float64, range 2.02k to 2.03k
- ss_venue — string, 281 classes
- ss_citationCount — float64, 0 to 134k
- ss_referenceCount — float64, 0 to 429
- ss_fieldsOfStudy — string, 47 classes
2405.05241
BenthicNet: A global compilation of seafloor images for deep learning applications
['Scott C. Lowe', 'Benjamin Misiuk', 'Isaac Xu', 'Shakhboz Abdulazizov', 'Amit R. Baroi', 'Alex C. Bastos', 'Merlin Best', 'Vicki Ferrini', 'Ariell Friedman', 'Deborah Hart', 'Ove Hoegh-Guldberg', 'Daniel Ierodiaconou', 'Julia Mackin-McLaughlin', 'Kathryn Markey', 'Pedro S. Menandro', 'Jacquomo Monk', 'Shreya Nemani', ...
['cs.CV', 'cs.LG']
Advances in underwater imaging enable collection of extensive seafloor image datasets necessary for monitoring important benthic ecosystems. The ability to collect seafloor imagery has outpaced our capacity to analyze it, hindering mobilization of this crucial environmental information. Machine learning approaches prov...
2024-05-08T17:37:57Z
null
Sci Data 12, 230 (2025)
10.1038/s41597-025-04491-1
null
null
null
null
null
null
null
2405.05374
Arctic-Embed: Scalable, Efficient, and Accurate Text Embedding Models
['Luke Merrick', 'Danmei Xu', 'Gaurav Nuti', 'Daniel Campos']
['cs.CL', 'cs.AI', 'cs.IR']
This report describes the training dataset creation and recipe behind the family of \texttt{arctic-embed} text embedding models (a set of five models ranging from 22 to 334 million parameters with weights open-sourced under an Apache-2 license). At the time of their release, each model achieved state-of-the-art retriev...
2024-05-08T19:05:18Z
17 pages, 11 Figures, 9 tables
null
null
null
null
null
null
null
null
null
2405.05376
Kreyòl-MT: Building MT for Latin American, Caribbean and Colonial African Creole Languages
['Nathaniel R. Robinson', 'Raj Dabre', 'Ammon Shurtz', 'Rasul Dent', 'Onenamiyi Onesi', 'Claire Bizon Monroc', 'Loïc Grobol', 'Hasan Muhammad', 'Ashi Garg', 'Naome A. Etori', 'Vijay Murari Tiyyala', 'Olanrewaju Samuel', 'Matthew Dean Stutzman', 'Bismarck Bamfo Odoom', 'Sanjeev Khudanpur', 'Stephen D. Richardson', 'Kent...
['cs.CL']
A majority of language technologies are tailored for a small number of high-resource languages, while relatively many low-resource languages are neglected. One such group, Creole languages, have long been marginalized in academic study, though their speakers could benefit from machine translation (MT). These languages ...
2024-05-08T19:06:19Z
NAACL 2024
null
null
null
null
null
null
null
null
null
2405.05378
"They are uncultured": Unveiling Covert Harms and Social Threats in LLM Generated Conversations
['Preetam Prabhu Srikar Dammu', 'Hayoung Jung', 'Anjali Singh', 'Monojit Choudhury', 'Tanushree Mitra']
['cs.CL', 'cs.AI', 'cs.CY', 'cs.HC', 'cs.LG']
Large language models (LLMs) have emerged as an integral part of modern societies, powering user-facing applications such as personal assistants and enterprise applications like recruitment tools. Despite their utility, research indicates that LLMs perpetuate systemic biases. Yet, prior works on LLM harms predominantly...
2024-05-08T19:08:45Z
null
null
null
null
null
null
null
null
null
null
2405.05852
Pre-trained Text-to-Image Diffusion Models Are Versatile Representation Learners for Control
['Gunshi Gupta', 'Karmesh Yadav', 'Yarin Gal', 'Dhruv Batra', 'Zsolt Kira', 'Cong Lu', 'Tim G. J. Rudner']
['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG', 'cs.RO', 'stat.ML']
Embodied AI agents require a fine-grained understanding of the physical world mediated through visual and language inputs. Such capabilities are difficult to learn solely from task-specific data. This has led to the emergence of pre-trained vision-language models as a tool for transferring representations learned from ...
2024-05-09T15:39:54Z
null
null
null
null
null
null
null
null
null
null
2405.05945
Lumina-T2X: Transforming Text into Any Modality, Resolution, and Duration via Flow-based Large Diffusion Transformers
['Peng Gao', 'Le Zhuo', 'Dongyang Liu', 'Ruoyi Du', 'Xu Luo', 'Longtian Qiu', 'Yuhang Zhang', 'Chen Lin', 'Rongjie Huang', 'Shijie Geng', 'Renrui Zhang', 'Junlin Xi', 'Wenqi Shao', 'Zhengkai Jiang', 'Tianshuo Yang', 'Weicai Ye', 'He Tong', 'Jingwen He', 'Yu Qiao', 'Hongsheng Li']
['cs.CV']
Sora unveils the potential of scaling Diffusion Transformer for generating photorealistic images and videos at arbitrary resolutions, aspect ratios, and durations, yet it still lacks sufficient implementation details. In this technical report, we introduce the Lumina-T2X family - a series of Flow-based Large Diffusion ...
2024-05-09T17:35:16Z
Technical Report; Code at: https://github.com/Alpha-VLLM/Lumina-T2X
null
null
Lumina-T2X: Transforming Text into Any Modality, Resolution, and Duration via Flow-based Large Diffusion Transformers
['Peng Gao', 'Le Zhuo', 'Ziyi Lin', 'Chris Liu', 'Junsong Chen', 'Ruoyi Du', 'Enze Xie', 'Xu Luo', 'Longtian Qiu', 'Yuhang Zhang', 'Chen Lin', 'Rongjie Huang', 'Shijie Geng', 'Renrui Zhang', 'Junlin Xi', 'Wenqi Shao', 'Zhengkai Jiang', 'Tianshuo Yang', 'Weicai Ye', 'He Tong', 'Jingwen He', 'Y. Qiao', 'Hongsheng Li']
2024
arXiv.org
91
164
['Computer Science']
2405.05999
LLMPot: Dynamically Configured LLM-based Honeypot for Industrial Protocol and Physical Process Emulation
['Christoforos Vasilatos', 'Dunia J. Mahboobeh', 'Hithem Lamri', 'Manaar Alam', 'Michail Maniatakos']
['cs.CR', 'cs.LG']
Industrial Control Systems (ICS) are extensively used in critical infrastructures ensuring efficient, reliable, and continuous operations. However, their increasing connectivity and addition of advanced features make them vulnerable to cyber threats, potentially leading to severe disruptions in essential services. In t...
2024-05-09T09:37:22Z
null
null
null
LLMPot: Dynamically Configured LLM-based Honeypot for Industrial Protocol and Physical Process Emulation
['Christoforos Vasilatos', 'D. Mahboobeh', 'Hithem Lamri', 'Manaar Alam', 'Michail Maniatakos']
2024
null
4
71
['Computer Science']
2405.06067
HMT: Hierarchical Memory Transformer for Efficient Long Context Language Processing
['Zifan He', 'Yingqi Cao', 'Zongyue Qin', 'Neha Prakriya', 'Yizhou Sun', 'Jason Cong']
['cs.CL', 'cs.LG']
Transformer-based large language models (LLM) have been widely used in language processing applications. However, due to the memory constraints of the devices, most of them restrict the context window. Even though recurrent models in previous works can memorize past tokens to enable unlimited context and maintain effec...
2024-05-09T19:32:49Z
NAACL 2025 Main Conference
null
null
null
null
null
null
null
null
null
2405.06239
SaudiBERT: A Large Language Model Pretrained on Saudi Dialect Corpora
['Faisal Qarah']
['cs.CL', 'cs.AI']
In this paper, we introduce SaudiBERT, a monodialect Arabic language model pretrained exclusively on Saudi dialectal text. To demonstrate the model's effectiveness, we compared SaudiBERT with six different multidialect Arabic language models across 11 evaluation datasets, which are divided into two groups: sentiment an...
2024-05-10T04:22:54Z
null
null
null
null
null
null
null
null
null
null
2405.06461
SketchDream: Sketch-based Text-to-3D Generation and Editing
['Feng-Lin Liu', 'Hongbo Fu', 'Yu-Kun Lai', 'Lin Gao']
['cs.GR']
Existing text-based 3D generation methods generate attractive results but lack detailed geometry control. Sketches, known for their conciseness and expressiveness, have contributed to intuitive 3D modeling but are confined to producing texture-less mesh models within predefined categories. Integrating sketch and text s...
2024-05-10T13:13:46Z
null
null
null
null
null
null
null
null
null
null
2405.06640
Linearizing Large Language Models
['Jean Mercat', 'Igor Vasiljevic', 'Sedrick Keh', 'Kushal Arora', 'Achal Dave', 'Adrien Gaidon', 'Thomas Kollar']
['cs.CL']
Linear transformers have emerged as a subquadratic-time alternative to softmax attention and have garnered significant interest due to their fixed-size recurrent state that lowers inference cost. However, their original formulation suffers from poor scaling and underperforms compute-matched transformers. Recent linear ...
2024-05-10T17:59:08Z
null
null
null
null
null
null
null
null
null
null
2405.06694
SUTRA: Scalable Multilingual Language Model Architecture
['Abhijit Bendale', 'Michael Sapienza', 'Steven Ripplinger', 'Simon Gibbs', 'Jaewon Lee', 'Pranav Mistry']
['cs.CL', 'cs.AI']
In this paper, we introduce SUTRA, multilingual Large Language Model architecture capable of understanding, reasoning, and generating text in over 50 languages. SUTRA's design uniquely decouples core conceptual understanding from language-specific processing, which facilitates scalable and efficient multilingual alignm...
2024-05-07T20:11:44Z
null
null
null
SUTRA: Scalable Multilingual Language Model Architecture
['Abhijit Bendale', 'Michael Sapienza', 'Steven Ripplinger', 'Simon Gibbs', 'Jaewon Lee', 'Pranav Mistry']
2024
arXiv.org
5
38
['Computer Science']
2405.06932
Piccolo2: General Text Embedding with Multi-task Hybrid Loss Training
['Junqin Huang', 'Zhongjie Hu', 'Zihao Jing', 'Mengya Gao', 'Yichao Wu']
['cs.CL', 'cs.AI']
In this report, we introduce Piccolo2, an embedding model that surpasses other models in the comprehensive evaluation over 6 tasks on CMTEB benchmark, setting a new state-of-the-art. Piccolo2 primarily leverages an efficient multi-task hybrid loss training approach, effectively harnessing textual data and labels from d...
2024-05-11T06:32:08Z
tech report
null
null
Piccolo2: General Text Embedding with Multi-task Hybrid Loss Training
['Junqin Huang', 'Zhongjie Hu', 'Zihao Jing', 'Mengya Gao', 'Yichao Wu']
2024
arXiv.org
6
33
['Computer Science']
2405.07101
Advanced Natural-based interaction for the ITAlian language: LLaMAntino-3-ANITA
['Marco Polignano', 'Pierpaolo Basile', 'Giovanni Semeraro']
['cs.CL', 'cs.AI']
In the pursuit of advancing natural language processing for the Italian language, we introduce a state-of-the-art Large Language Model (LLM) based on the novel Meta LLaMA-3 model: LLaMAntino-3-ANITA-8B-Inst-DPO-ITA. We fine-tuned the original 8B parameters instruction tuned model using the Supervised Fine-tuning (SFT) ...
2024-05-11T22:02:55Z
null
null
null
Advanced Natural-based interaction for the ITAlian language: LLaMAntino-3-ANITA
['Marco Polignano', 'Pierpaolo Basile', 'Giovanni Semeraro']
2024
arXiv.org
20
29
['Computer Science']
2405.07615
ViWikiFC: Fact-Checking for Vietnamese Wikipedia-Based Textual Knowledge Source
['Hung Tuan Le', 'Long Truong To', 'Manh Trong Nguyen', 'Kiet Van Nguyen']
['cs.CL']
Fact-checking is essential due to the explosion of misinformation in the media ecosystem. Although false information exists in every language and country, most research to solve the problem mainly concentrated on huge communities like English and Chinese. Low-resource languages like Vietnamese are necessary to explore ...
2024-05-13T10:24:05Z
null
null
null
null
null
null
null
null
null
null
2405.07703
OpenLLM-Ro -- Technical Report on Open-source Romanian LLMs
['Mihai Masala', 'Denis C. Ilie-Ablachim', 'Dragos Corlatescu', 'Miruna Zavelca', 'Marius Leordeanu', 'Horia Velicu', 'Marius Popescu', 'Mihai Dascalu', 'Traian Rebedea']
['cs.CL']
In recent years, Large Language Models (LLMs) have achieved almost human-like performance on various tasks. While some LLMs have been trained on multilingual data, most of the training data is in English. Hence, their performance in English greatly exceeds their performance in other languages. This document presents ou...
2024-05-13T12:46:11Z
null
null
null
null
null
null
null
null
null
null
2405.07719
USP: A Unified Sequence Parallelism Approach for Long Context Generative AI
['Jiarui Fang', 'Shangchun Zhao']
['cs.LG', 'cs.AI']
Sequence parallelism (SP), which divides the sequence dimension of input tensors across multiple computational devices, is becoming key to unlocking the long-context capabilities of generative AI models. This paper investigates the state-of-the-art SP approaches, i.e. DeepSpeed-Ulysses and Ring-Attention, and proposes ...
2024-05-13T13:08:02Z
null
null
null
null
null
null
null
null
null
null
2405.07778
A Comprehensive Analysis of Static Word Embeddings for Turkish
['Karahan Sarıtaş', 'Cahid Arda Öz', 'Tunga Güngör']
['cs.CL', 'cs.AI']
Word embeddings are fixed-length, dense and distributed word representations that are used in natural language processing (NLP) applications. There are basically two types of word embedding models which are non-contextual (static) models and contextual models. The former method generates a single embedding for a word r...
2024-05-13T14:23:37Z
null
Expert Systems with Applications Volume 252, Part A, 15 October 2024, 124123
10.1016/j.eswa.2024.124123
A Comprehensive Analysis of Static Word Embeddings for Turkish
['Karahan Saritas', 'Cahid Arda Öz', 'Tunga Güngör']
2024
Expert systems with applications
4
60
['Computer Science']
2405.07813
Localizing Task Information for Improved Model Merging and Compression
['Ke Wang', 'Nikolaos Dimitriadis', 'Guillermo Ortiz-Jimenez', 'François Fleuret', 'Pascal Frossard']
['cs.LG', 'cs.CV']
Model merging and task arithmetic have emerged as promising scalable approaches to merge multiple single-task checkpoints to one multi-task model, but their applicability is reduced by significant performance loss. Previous works have linked these drops to interference in the weight space and erasure of important task-...
2024-05-13T14:54:37Z
Accepted ICML 2024; The first two authors contributed equally to this work; Project website: https://tall-masks.github.io
null
null
null
null
null
null
null
null
null
2405.07863
RLHF Workflow: From Reward Modeling to Online RLHF
['Hanze Dong', 'Wei Xiong', 'Bo Pang', 'Haoxiang Wang', 'Han Zhao', 'Yingbo Zhou', 'Nan Jiang', 'Doyen Sahoo', 'Caiming Xiong', 'Tong Zhang']
['cs.LG', 'cs.AI', 'cs.CL', 'stat.ML']
We present the workflow of Online Iterative Reinforcement Learning from Human Feedback (RLHF) in this technical report, which is widely reported to outperform its offline counterpart by a large margin in the recent large language model (LLM) literature. However, existing open-source RLHF projects are still largely conf...
2024-05-13T15:50:39Z
Published in Transactions on Machine Learning Research (09/2024)
null
null
null
null
null
null
null
null
null
2405.07883
Zero-Shot Tokenizer Transfer
['Benjamin Minixhofer', 'Edoardo Maria Ponti', 'Ivan Vulić']
['cs.CL']
Language models (LMs) are bound to their tokenizer, which maps raw text to a sequence of vocabulary items (tokens). This restricts their flexibility: for example, LMs trained primarily on English may still perform well in other natural and programming languages, but have vastly decreased efficiency due to their English...
2024-05-13T16:17:10Z
null
null
null
null
null
null
null
null
null
null
2405.07913
CTRLorALTer: Conditional LoRAdapter for Efficient 0-Shot Control & Altering of T2I Models
['Nick Stracke', 'Stefan Andreas Baumann', 'Joshua M. Susskind', 'Miguel Angel Bautista', 'Björn Ommer']
['cs.CV']
Text-to-image generative models have become a prominent and powerful tool that excels at generating high-resolution realistic images. However, guiding the generative process of these models to consider detailed forms of conditioning reflecting style and/or structure information remains an open problem. In this paper, w...
2024-05-13T16:46:44Z
for the project page and code, view https://compvis.github.io/LoRAdapter/
null
null
CTRLorALTer: Conditional LoRAdapter for Efficient 0-Shot Control & Altering of T2I Models
['Nick Stracke', 'Stefan Andreas Baumann', 'J. Susskind', 'Miguel Angel Bautista', 'Bjorn Ommer']
2024
arXiv.org
3
51
['Computer Science']
2405.07920
Rank-DistiLLM: Closing the Effectiveness Gap Between Cross-Encoders and LLMs for Passage Re-Ranking
['Ferdinand Schlatt', 'Maik Fröbe', 'Harrisen Scells', 'Shengyao Zhuang', 'Bevan Koopman', 'Guido Zuccon', 'Benno Stein', 'Martin Potthast', 'Matthias Hagen']
['cs.IR']
Cross-encoders distilled from large language models (LLMs) are often more effective re-rankers than cross-encoders fine-tuned on manually labeled data. However, distilled models do not match the effectiveness of their teacher LLMs. We hypothesize that this effectiveness gap is due to the fact that previous work has not...
2024-05-13T16:51:53Z
Accepted at ECIR'25
null
10.1007/978-3-031-88714-7_31
Rank-DistiLLM: Closing the Effectiveness Gap Between Cross-Encoders and LLMs for Passage Re-ranking
['Ferdinand Schlatt', 'Maik Frobe', 'Harrisen Scells', 'Shengyao Zhuang', 'B. Koopman', 'G. Zuccon', 'Benno Stein', 'Martin Potthast', 'Matthias Hagen']
2024
European Conference on Information Retrieval
6
70
['Computer Science']
2405.07940
RAID: A Shared Benchmark for Robust Evaluation of Machine-Generated Text Detectors
['Liam Dugan', 'Alyssa Hwang', 'Filip Trhlik', 'Josh Magnus Ludan', 'Andrew Zhu', 'Hainiu Xu', 'Daphne Ippolito', 'Chris Callison-Burch']
['cs.CL', 'I.2.7']
Many commercial and open-source models claim to detect machine-generated text with extremely high accuracy (99% or more). However, very few of these detectors are evaluated on shared benchmark datasets and even when they are, the datasets used for evaluation are insufficiently challenging-lacking variations in sampling...
2024-05-13T17:15:14Z
ACL 2024
null
null
RAID: A Shared Benchmark for Robust Evaluation of Machine-Generated Text Detectors
['Liam Dugan', 'Alyssa Hwang', 'Filip Trhlik', 'Josh Magnus Ludan', 'Andrew Zhu', 'Hainiu Xu', 'Daphne Ippolito', 'Christopher Callison-Burch']
2024
Annual Meeting of the Association for Computational Linguistics
52
94
['Computer Science']
2405.07960
AgentClinic: a multimodal agent benchmark to evaluate AI in simulated clinical environments
['Samuel Schmidgall', 'Rojin Ziaei', 'Carl Harris', 'Eduardo Reis', 'Jeffrey Jopling', 'Michael Moor']
['cs.HC', 'cs.CL']
Evaluating large language models (LLM) in clinical scenarios is crucial to assessing their potential clinical utility. Existing benchmarks rely heavily on static question-answering, which does not accurately depict the complex, sequential nature of clinical decision-making. Here, we introduce AgentClinic, a multimodal ...
2024-05-13T17:38:53Z
null
null
null
null
null
null
null
null
null
null
2405.07988
MedVersa: A Generalist Foundation Model for Medical Image Interpretation
['Hong-Yu Zhou', 'Julián Nicolás Acosta', 'Subathra Adithan', 'Suvrankar Datta', 'Eric J. Topol', 'Pranav Rajpurkar']
['cs.CV']
Current medical AI systems are often limited to narrow applications, hindering widespread adoption. We present MedVersa, a generalist foundation model trained on tens of millions of compiled medical instances. MedVersa unlocks generalist learning from multimodal inputs and outputs, representing the first example of a g...
2024-05-13T17:58:51Z
Technical study
null
null
MedVersa: A Generalist Foundation Model for Medical Image Interpretation
['Hong-Yu Zhou', 'J. N. Acosta', 'Subathra Adithan', 'Suvrankar Datta', 'E. Topol', 'P. Rajpurkar']
2024
null
29
0
['Computer Science']
2405.07992
MambaOut: Do We Really Need Mamba for Vision?
['Weihao Yu', 'Xinchao Wang']
['cs.CV', 'cs.AI', 'cs.LG']
Mamba, an architecture with RNN-like token mixer of state space model (SSM), was recently introduced to address the quadratic complexity of the attention mechanism and subsequently applied to vision tasks. Nevertheless, the performance of Mamba for vision is often underwhelming when compared with convolutional and atte...
2024-05-13T17:59:56Z
Code: https://github.com/yuweihao/MambaOut
null
null
null
null
null
null
null
null
null
2405.08553
Improving Transformers with Dynamically Composable Multi-Head Attention
['Da Xiao', 'Qingye Meng', 'Shengping Li', 'Xingyuan Yuan']
['cs.LG', 'cs.CL']
Multi-Head Attention (MHA) is a key component of Transformer. In MHA, attention heads work independently, causing problems such as low-rank bottleneck of attention score matrices and head redundancy. We propose Dynamically Composable Multi-Head Attention (DCMHA), a parameter and computation efficient attention architec...
2024-05-14T12:41:11Z
Accepted to the 41st International Conference on Machine Learning (ICML'24 oral)
null
null
null
null
null
null
null
null
null
2405.08748
Hunyuan-DiT: A Powerful Multi-Resolution Diffusion Transformer with Fine-Grained Chinese Understanding
['Zhimin Li', 'Jianwei Zhang', 'Qin Lin', 'Jiangfeng Xiong', 'Yanxin Long', 'Xinchi Deng', 'Yingfang Zhang', 'Xingchao Liu', 'Minbin Huang', 'Zedong Xiao', 'Dayou Chen', 'Jiajun He', 'Jiahao Li', 'Wenyue Li', 'Chen Zhang', 'Rongwei Quan', 'Jianxiang Lu', 'Jiabin Huang', 'Xiaoyan Yuan', 'Xiaoxiao Zheng', 'Yixuan Li', 'J...
['cs.CV']
We present Hunyuan-DiT, a text-to-image diffusion transformer with fine-grained understanding of both English and Chinese. To construct Hunyuan-DiT, we carefully design the transformer structure, text encoder, and positional encoding. We also build from scratch a whole data pipeline to update and evaluate data for iter...
2024-05-14T16:33:25Z
Project Page: https://dit.hunyuan.tencent.com/
null
null
null
null
null
null
null
null
null
2405.09055
A safety realignment framework via subspace-oriented model fusion for large language models
['Xin Yi', 'Shunfan Zheng', 'Linlin Wang', 'Xiaoling Wang', 'Liang He']
['cs.CL']
The current safeguard mechanisms for large language models (LLMs) are indeed susceptible to jailbreak attacks, making them inherently fragile. Even the process of fine-tuning on apparently benign data for downstream tasks can jeopardize safety. One potential solution is to conduct safety fine-tuning subsequent to downs...
2024-05-15T03:04:05Z
null
null
null
null
null
null
null
null
null
null
2405.09111
CarDreamer: Open-Source Learning Platform for World Model based Autonomous Driving
['Dechen Gao', 'Shuangyu Cai', 'Hanchu Zhou', 'Hang Wang', 'Iman Soltani', 'Junshan Zhang']
['cs.RO', 'cs.AI']
To safely navigate intricate real-world scenarios, autonomous vehicles must be able to adapt to diverse road conditions and anticipate future events. World model (WM) based reinforcement learning (RL) has emerged as a promising approach by learning and predicting the complex dynamics of various environments. Neverthele...
2024-05-15T05:57:20Z
Dechen Gao, Shuangyu Cai, Hanchu Zhou, Hang Wang contributed equally
null
null
null
null
null
null
null
null
null
2405.09215
Xmodel-VLM: A Simple Baseline for Multimodal Vision Language Model
['Wanting Xu', 'Yang Liu', 'Langping He', 'Xucheng Huang', 'Ling Jiang']
['cs.CV', 'cs.AI']
We introduce Xmodel-VLM, a cutting-edge multimodal vision language model. It is designed for efficient deployment on consumer GPU servers. Our work directly confronts a pivotal industry issue by grappling with the prohibitive service costs that hinder the broad adoption of large-scale multimodal systems. Through rigoro...
2024-05-15T09:47:59Z
null
null
null
Xmodel-VLM: A Simple Baseline for Multimodal Vision Language Model
['Wanting Xu', 'Yang Liu', 'Langping He', 'Xucheng Huang', 'Ling Jiang']
2024
arXiv.org
2
49
['Computer Science']
2405.09318
Transfer Learning in Pre-Trained Large Language Models for Malware Detection Based on System Calls
['Pedro Miguel Sánchez Sánchez', 'Alberto Huertas Celdrán', 'Gérôme Bovet', 'Gregorio Martínez Pérez']
['cs.CR', 'cs.LG']
In the current cybersecurity landscape, protecting military devices such as communication and battlefield management systems against sophisticated cyber attacks is crucial. Malware exploits vulnerabilities through stealth methods, often evading traditional detection mechanisms such as software signatures. The applicati...
2024-05-15T13:19:43Z
Submitted to IEEE MILCOM 2024
null
null
Transfer Learning in Pre-Trained Large Language Models for Malware Detection Based on System Calls
['P. Sánchez', 'Alberto Huertas Celdrán', 'Gérôme Bovet', 'Gregorio Martínez Pérez']
2024
IEEE Military Communications Conference
11
24
['Computer Science']
2405.09365
SARATR-X: Toward Building A Foundation Model for SAR Target Recognition
['Weijie Li', 'Wei Yang', 'Yuenan Hou', 'Li Liu', 'Yongxiang Liu', 'Xiang Li']
['cs.CV']
Despite the remarkable progress in synthetic aperture radar automatic target recognition (SAR ATR), recent efforts have concentrated on detecting and classifying a specific category, e.g., vehicles, ships, airplanes, or buildings. One of the fundamental limitations of the top-performing SAR ATR methods is that the lear...
2024-05-15T14:17:44Z
20 pages, 9 figures
null
null
SARATR-X: Toward Building a Foundation Model for SAR Target Recognition
['Wei-Jang Li', 'Wei Yang', 'Yuenan Hou', 'Li Liu', 'Yongxiang Liu', 'Xiang Li']
2024
IEEE Transactions on Image Processing
11
132
['Medicine', 'Computer Science']
2405.09605
Elements of World Knowledge (EWoK): A Cognition-Inspired Framework for Evaluating Basic World Knowledge in Language Models
['Anna A. Ivanova', 'Aalok Sathe', 'Benjamin Lipkin', 'Unnathi Kumar', 'Setayesh Radkani', 'Thomas H. Clark', 'Carina Kauf', 'Jennifer Hu', 'R. T. Pramod', 'Gabriel Grand', 'Vivian Paulun', 'Maria Ryskina', 'Ekin Akyürek', 'Ethan Wilcox', 'Nafisa Rashid', 'Leshem Choshen', 'Roger Levy', 'Evelina Fedorenko', 'Joshua Ten...
['cs.CL', 'cs.AI', 'cs.LG']
The ability to build and reason about models of the world is essential for situated language understanding. But evaluating world modeling capabilities in modern AI systems -- especially those based on language models -- has proven challenging, in large part because of the difficulty of disentangling conceptual knowledg...
2024-05-15T17:19:42Z
Accepted to Transactions of the ACL (TACL). Contains 25 pages (14 main), 6 figures. Visit http://ewok-core.github.io for data and code. Authors Anna Ivanova, Aalok Sathe, Benjamin Lipkin contributed equally
null
null
null
null
null
null
null
null
null
2405.09673
LoRA Learns Less and Forgets Less
['Dan Biderman', 'Jacob Portes', 'Jose Javier Gonzalez Ortiz', 'Mansheej Paul', 'Philip Greengard', 'Connor Jennings', 'Daniel King', 'Sam Havens', 'Vitaliy Chiley', 'Jonathan Frankle', 'Cody Blakeney', 'John P. Cunningham']
['cs.LG', 'cs.AI', 'cs.CL']
Low-Rank Adaptation (LoRA) is a widely-used parameter-efficient finetuning method for large language models. LoRA saves memory by training only low rank perturbations to selected weight matrices. In this work, we compare the performance of LoRA and full finetuning on two target domains, programming and mathematics. We ...
2024-05-15T19:27:45Z
Final version with new experiments and analyses, as accepted to Transactions on Machine Learning Research, August 2024 (Featured Certification). https://openreview.net/forum?id=aloEru2qCG&noteId=Jb3PQNQDI2
null
null
LoRA Learns Less and Forgets Less
['D. Biderman', 'Jose Gonzalez Ortiz', 'Jacob Portes', 'Mansheej Paul', 'Philip Greengard', 'Connor Jennings', 'Daniel King', 'Sam Havens', 'Vitaliy Chiley', 'Jonathan Frankle', 'Cody Blakeney', 'John P. Cunningham']
2024
Trans. Mach. Learn. Res.
142
89
['Computer Science']
2405.09814
Semantic Gesticulator: Semantics-Aware Co-Speech Gesture Synthesis
['Zeyi Zhang', 'Tenglong Ao', 'Yuyao Zhang', 'Qingzhe Gao', 'Chuan Lin', 'Baoquan Chen', 'Libin Liu']
['cs.GR', 'cs.CV', 'cs.SD', 'eess.AS']
In this work, we present Semantic Gesticulator, a novel framework designed to synthesize realistic gestures accompanying speech with strong semantic correspondence. Semantically meaningful gestures are crucial for effective non-verbal communication, but such gestures often fall within the long tail of the distribution ...
2024-05-16T05:09:01Z
SIGGRAPH 2024 (Journal Track); Project page: https://pku-mocca.github.io/Semantic-Gesticulator-Page
null
null
Semantic Gesticulator: Semantics-Aware Co-Speech Gesture Synthesis
['Zeyi Zhang', 'Tenglong Ao', 'Yuyao Zhang', 'Qingzhe Gao', 'Chuan Lin', 'Baoquan Chen', 'Libin Liu']
2024
ACM Transactions on Graphics
17
32
['Computer Science', 'Engineering']
2405.09818
Chameleon: Mixed-Modal Early-Fusion Foundation Models
['Chameleon Team']
['cs.CL']
We present Chameleon, a family of early-fusion token-based mixed-modal models capable of understanding and generating images and text in any arbitrary sequence. We outline a stable training approach from inception, an alignment recipe, and an architectural parameterization tailored for the early-fusion, token-based, mi...
2024-05-16T05:23:41Z
null
null
null
null
null
null
null
null
null
null
2405.09927
Moreau Envelope for Nonconvex Bi-Level Optimization: A Single-loop and Hessian-free Solution Strategy
['Risheng Liu', 'Zhu Liu', 'Wei Yao', 'Shangzhi Zeng', 'Jin Zhang']
['math.OC', 'cs.LG']
This work focuses on addressing two major challenges in the context of large-scale nonconvex Bi-Level Optimization (BLO) problems, which are increasingly applied in machine learning due to their ability to model nested structures. These challenges involve ensuring computational efficiency and providing theoretical guar...
2024-05-16T09:33:28Z
Accepted by ICML 2024
null
null
null
null
null
null
null
null
null
2405.10140
Libra: Building Decoupled Vision System on Large Language Models
['Yifan Xu', 'Xiaoshan Yang', 'Yaguang Song', 'Changsheng Xu']
['cs.CV']
In this work, we introduce Libra, a prototype model with a decoupled vision system on a large language model (LLM). The decoupled vision system decouples inner-modal modeling and cross-modal interaction, yielding unique visual information modeling and effective cross-modal comprehension. Libra is trained through discre...
2024-05-16T14:34:44Z
ICML2024
null
null
Libra: Building Decoupled Vision System on Large Language Models
['Yifan Xu', 'Xiaoshan Yang', 'Y. Song', 'Changsheng Xu']
2024
International Conference on Machine Learning
8
79
['Computer Science']
2405.10243
DocuMint: Docstring Generation for Python using Small Language Models
['Bibek Poudel', 'Adam Cook', 'Sekou Traore', 'Shelah Ameli']
['cs.SE', 'cs.LG']
Effective communication, specifically through documentation, is the beating heart of collaboration among contributors in software development. Recent advancements in language models (LMs) have enabled the introduction of a new type of actor in that ecosystem: LM-powered assistants capable of code generation, optimizati...
2024-05-16T16:46:46Z
12 pages, 4 figures
null
null
null
null
null
null
null
null
null
2405.10254
PRISM: A Multi-Modal Generative Foundation Model for Slide-Level Histopathology
['George Shaikovski', 'Adam Casson', 'Kristen Severson', 'Eric Zimmermann', 'Yi Kan Wang', 'Jeremy D. Kunz', 'Juan A. Retamero', 'Gerard Oakley', 'David Klimstra', 'Christopher Kanan', 'Matthew Hanna', 'Michal Zelechowski', 'Julian Viret', 'Neil Tenenholtz', 'James Hall', 'Nicolo Fusi', 'Razik Yousfi', 'Peter Hamilton'...
['eess.IV', 'cs.CV', 'cs.LG']
Foundation models in computational pathology promise to unlock the development of new clinical decision support systems and models for precision medicine. However, there is a mismatch between most clinical analysis, which is defined at the level of one or more whole slide images, and foundation models to date, which pr...
2024-05-16T16:59:12Z
null
null
null
PRISM: A Multi-Modal Generative Foundation Model for Slide-Level Histopathology
['George Shaikovski', 'Adam Casson', 'Kristen Severson', 'Eric Zimmermann', 'Yi Kan Wang', 'J. Kunz', 'J. Retamero', 'Gerard Oakley', 'D. Klimstra', 'C. Kanan', 'Matthew G Hanna', 'Michal Zelechowski', 'Julian Viret', 'Neil Tenenholtz', 'James Hall', 'Nicolò Fusi', 'Razik Yousfi', 'Peter Hamilton', 'William A. Moye', '...
2024
arXiv.org
35
48
['Computer Science', 'Engineering']
2405.10315
TRANSIC: Sim-to-Real Policy Transfer by Learning from Online Correction
['Yunfan Jiang', 'Chen Wang', 'Ruohan Zhang', 'Jiajun Wu', 'Li Fei-Fei']
['cs.RO', 'cs.AI', 'cs.LG']
Learning in simulation and transferring the learned policy to the real world has the potential to enable generalist robots. The key challenge of this approach is to address simulation-to-reality (sim-to-real) gaps. Previous methods often require domain-specific knowledge a priori. We argue that a straightforward way to...
2024-05-16T17:59:07Z
8th Conference on Robot Learning (CoRL 2024), Munich, Germany. Project website: https://transic-robot.github.io/
null
null
null
null
null
null
null
null
null
2405.10517
Towards Better Question Generation in QA-based Event Extraction
['Zijin Hong', 'Jian Liu']
['cs.CL']
Event Extraction (EE) is an essential information extraction task that aims to extract event-related information from unstructured texts. The paradigm of this task has shifted from conventional classification-based methods to more contemporary question-answering-based (QA-based) approaches. However, in QA-based EE, the...
2024-05-17T03:52:01Z
Accepted to ACL2024 Findings
null
null
null
null
null
null
null
null
null
2405.10637
Layer-Condensed KV Cache for Efficient Inference of Large Language Models
['Haoyi Wu', 'Kewei Tu']
['cs.CL']
Huge memory consumption has been a major bottleneck for deploying high-throughput large language models in real-world applications. In addition to the large number of parameters, the key-value (KV) cache for the attention mechanism in the transformer architecture consumes a significant amount of memory, especially when...
2024-05-17T08:59:46Z
Accepted to ACL2024 main conference
null
null
Layer-Condensed KV Cache for Efficient Inference of Large Language Models
['Haoyi Wu', 'Kewei Tu']
2024
Annual Meeting of the Association for Computational Linguistics
19
38
['Computer Science']
2405.10725
INDUS: Effective and Efficient Language Models for Scientific Applications
['Bishwaranjan Bhattacharjee', 'Aashka Trivedi', 'Masayasu Muraoka', 'Muthukumaran Ramasubramanian', 'Takuma Udagawa', 'Iksha Gurung', 'Nishan Pantha', 'Rong Zhang', 'Bharath Dandala', 'Rahul Ramachandran', 'Manil Maskey', 'Kaylin Bugbee', 'Mike Little', 'Elizabeth Fancher', 'Irina Gerasimov', 'Armin Mehrabian', 'Laure...
['cs.CL', 'cs.IR']
Large language models (LLMs) trained on general domain corpora showed remarkable results on natural language processing (NLP) tasks. However, previous research demonstrated LLMs trained using domain-focused corpora perform better on specialized tasks. Inspired by this insight, we developed INDUS, a comprehensive suite ...
2024-05-17T12:15:07Z
EMNLP 2024 (Industry Track)
null
null
null
null
null
null
null
null
null
2405.11143
OpenRLHF: An Easy-to-use, Scalable and High-performance RLHF Framework
['Jian Hu', 'Xibin Wu', 'Wei Shen', 'Jason Klein Liu', 'Zilin Zhu', 'Weixun Wang', 'Songlin Jiang', 'Haoran Wang', 'Hao Chen', 'Bin Chen', 'Weikai Fang', 'Xianyu', 'Yu Cao', 'Haotian Xu', 'Yiming Liu']
['cs.AI', 'cs.CL', 'cs.LG']
Large Language Models (LLMs) fine-tuned via Reinforcement Learning from Human Feedback (RLHF) and Reinforcement Learning with Verifiable Rewards (RLVR) significantly improve the alignment of human-AI values and further raise the upper bound of AI capabilities, particularly in reasoning-intensive, long-context Chain-of-...
2024-05-20T01:04:40Z
null
null
null
OpenRLHF: An Easy-to-use, Scalable and High-performance RLHF Framework
['Jian Hu', 'Xibin Wu', 'Weixun Wang', 'Dehao Zhang', 'Yu Cao', 'OpenLLMAI Team', 'Netease Fuxi', 'AI Lab', 'Alibaba Group']
2024
arXiv.org
130
30
['Computer Science']
2405.11290
MBIAS: Mitigating Bias in Large Language Models While Retaining Context
['Shaina Raza', 'Ananya Raval', 'Veronica Chatrath']
['cs.CL']
The deployment of Large Language Models (LLMs) in diverse applications necessitates an assurance of safety without compromising the contextual integrity of the generated content. Traditional approaches, including safety-specific fine-tuning or adversarial testing, often yield safe outputs at the expense of contextual m...
2024-05-18T13:31:12Z
null
null
null
null
null
null
null
null
null
null
2405.11403
MapCoder: Multi-Agent Code Generation for Competitive Problem Solving
['Md. Ashraful Islam', 'Mohammed Eunus Ali', 'Md Rizwan Parvez']
['cs.CL', 'cs.AI']
Code synthesis, which requires a deep understanding of complex natural language problem descriptions, generation of code instructions for complex algorithms and data structures, and the successful execution of comprehensive unit tests, presents a significant challenge. While large language models (LLMs) demonstrate imp...
2024-05-18T22:10:15Z
null
null
null
null
null
null
null
null
null
null
2405.11449
NetMamba: Efficient Network Traffic Classification via Pre-training Unidirectional Mamba
['Tongze Wang', 'Xiaohui Xie', 'Wenduo Wang', 'Chuyi Wang', 'Youjian Zhao', 'Yong Cui']
['cs.LG', 'cs.NI']
Network traffic classification is a crucial research area aiming to enhance service quality, streamline network management, and bolster cybersecurity. To address the growing complexity of transmission encryption techniques, various machine learning and deep learning methods have been proposed. However, existing approac...
2024-05-19T04:58:53Z
null
null
null
null
null
null
null
null
null
null
2405.11582
SLAB: Efficient Transformers with Simplified Linear Attention and Progressive Re-parameterized Batch Normalization
['Jialong Guo', 'Xinghao Chen', 'Yehui Tang', 'Yunhe Wang']
['cs.CV', 'cs.CL']
Transformers have become foundational architectures for both natural language and computer vision tasks. However, the high computational cost makes it quite challenging to deploy on resource-constraint devices. This paper investigates the computational bottleneck modules of efficient transformer, i.e., normalization la...
2024-05-19T15:22:25Z
ICML 2024
null
null
SLAB: Efficient Transformers with Simplified Linear Attention and Progressive Re-parameterized Batch Normalization
['Jialong Guo', 'Xinghao Chen', 'Yehui Tang', 'Yunhe Wang']
2024
International Conference on Machine Learning
14
46
['Computer Science']
2405.11724
Token-wise Influential Training Data Retrieval for Large Language Models
['Huawei Lin', 'Jikai Long', 'Zhaozhuo Xu', 'Weijie Zhao']
['cs.CL', 'cs.AI', 'cs.CR', 'cs.IR']
Given a Large Language Model (LLM) generation, how can we identify which training data led to this generation? In this paper, we proposed RapidIn, a scalable framework adapting to LLMs for estimating the influence of each training data. The proposed framework consists of two stages: caching and retrieval. First, we com...
2024-05-20T01:57:34Z
Accepted to ACL 2024. Keywords: Influence Function, Influence Estimation, Training Data Attribution
null
null
null
null
null
null
null
null
null
2405.11788
TinyLLaVA Factory: A Modularized Codebase for Small-scale Large Multimodal Models
['Junlong Jia', 'Ying Hu', 'Xi Weng', 'Yiming Shi', 'Miao Li', 'Xingjian Zhang', 'Baichuan Zhou', 'Ziyu Liu', 'Jie Luo', 'Lei Huang', 'Ji Wu']
['cs.LG']
We present TinyLLaVA Factory, an open-source modular codebase for small-scale large multimodal models (LMMs) with a focus on simplicity of code implementations, extensibility of new features, and reproducibility of training results. Following the design philosophy of the factory pattern in software engineering, TinyLLa...
2024-05-20T05:11:02Z
Our codebase is made public at https://github.com/TinyLLaVA/TinyLLaVA_Factory with documentation available at https://tinyllava-factory.readthedocs.io/en/latest/
null
null
null
null
null
null
null
null
null
2405.11794
ViViD: Video Virtual Try-on using Diffusion Models
['Zixun Fang', 'Wei Zhai', 'Aimin Su', 'Hongliang Song', 'Kai Zhu', 'Mao Wang', 'Yu Chen', 'Zhiheng Liu', 'Yang Cao', 'Zheng-Jun Zha']
['cs.CV']
Video virtual try-on aims to transfer a clothing item onto the video of a target person. Directly applying the technique of image-based try-on to the video domain in a frame-wise manner will cause temporal-inconsistent outcomes while previous video-based try-on solutions can only generate low visual quality and blurrin...
2024-05-20T05:28:22Z
null
null
null
null
null
null
null
null
null
null
2405.11831
SSAMBA: Self-Supervised Audio Representation Learning with Mamba State Space Model
['Siavash Shams', 'Sukru Samet Dindar', 'Xilin Jiang', 'Nima Mesgarani']
['eess.AS', 'cs.LG']
Transformers have revolutionized deep learning across various tasks, including audio representation learning, due to their powerful modeling capabilities. However, they often suffer from quadratic complexity in both GPU memory usage and computational inference time, affecting their efficiency. Recently, state space mod...
2024-05-20T06:58:47Z
Code at https://github.com/SiavashShams/ssamba
2024 IEEE Spoken Language Technology Workshop (SLT), Macao, pp. 1053-1059
10.1109/SLT61566.2024.10832304
null
null
null
null
null
null
null
2405.11850
Rethinking Overlooked Aspects in Vision-Language Models
['Yuan Liu', 'Le Tian', 'Xiao Zhou', 'Jie Zhou']
['cs.CV']
Recent advancements in large vision-language models (LVLMs), such as GPT4-V and LLaVA, have been substantial. LLaVA's modular architecture, in particular, offers a blend of simplicity and efficiency. Recent works mainly focus on introducing more pre-training and instruction tuning data to improve model's performance. T...
2024-05-20T07:53:41Z
null
null
null
null
null
null
null
null
null
null
2405.12107
Imp: Highly Capable Large Multimodal Models for Mobile Devices
['Zhenwei Shao', 'Zhou Yu', 'Jun Yu', 'Xuecheng Ouyang', 'Lihao Zheng', 'Zhenbiao Gai', 'Mingyang Wang', 'Jiajun Ding']
['cs.CV', 'cs.CL']
By harnessing the capabilities of large language models (LLMs), recent large multimodal models (LMMs) have shown remarkable versatility in open-world multimodal understanding. Nevertheless, they are usually parameter-heavy and computation-intensive, thus hindering their applicability in resource-constrained scenarios. ...
2024-05-20T15:23:19Z
fix some typos and correct a few number in the tables
null
null
null
null
null
null
null
null
null
2405.12255
Mammo-CLIP: A Vision Language Foundation Model to Enhance Data Efficiency and Robustness in Mammography
['Shantanu Ghosh', 'Clare B. Poynton', 'Shyam Visweswaran', 'Kayhan Batmanghelich']
['eess.IV', 'cs.CV']
The lack of large and diverse training data on Computer-Aided Diagnosis (CAD) in breast cancer detection has been one of the concerns that impedes the adoption of the system. Recently, pre-training with large-scale image text datasets via Vision-Language models (VLM) (e.g., CLIP) partially addresses the issue of robustne...
2024-05-20T08:27:39Z
MICCAI 2024, early accept, top 11%
null
null
null
null
null
null
null
null
null
2405.12399
Diffusion for World Modeling: Visual Details Matter in Atari
['Eloi Alonso', 'Adam Jelley', 'Vincent Micheli', 'Anssi Kanervisto', 'Amos Storkey', 'Tim Pearce', 'François Fleuret']
['cs.LG', 'cs.AI', 'cs.CV']
World models constitute a promising approach for training reinforcement learning agents in a safe and sample-efficient manner. Recent world models predominantly operate on sequences of discrete latent variables to model environment dynamics. However, this compression into a compact discrete representation may ignore vi...
2024-05-20T22:51:05Z
NeurIPS 2024 (Spotlight)
null
null
Diffusion for World Modeling: Visual Details Matter in Atari
['Eloi Alonso', 'Adam Jelley', 'Vincent Micheli', 'A. Kanervisto', 'A. Storkey', 'Tim Pearce', 'François Fleuret']
2024
Neural Information Processing Systems
69
89
['Computer Science']
2405.12612
Tagengo: A Multilingual Chat Dataset
['Peter Devine']
['cs.CL', 'cs.AI', 'cs.LG']
Open source large language models (LLMs) have shown great improvements in recent times. However, many of these models are focused solely on popular spoken languages. We present a high quality dataset of more than 70k prompt-response pairs in 74 languages which consist of human generated prompts and synthetic responses....
2024-05-21T09:06:36Z
null
null
null
null
null
null
null
null
null
null
2405.12739
SPO: Multi-Dimensional Preference Sequential Alignment With Implicit Reward Modeling
['Xingzhou Lou', 'Junge Zhang', 'Jian Xie', 'Lifeng Liu', 'Dong Yan', 'Kaiqi Huang']
['cs.LG']
Human preference alignment is critical in building powerful and reliable large language models (LLMs). However, current methods either ignore the multi-dimensionality of human preferences (e.g. helpfulness and harmlessness) or struggle with the complexity of managing multiple reward models. To address these issues, we ...
2024-05-21T12:47:17Z
null
null
null
null
null
null
null
null
null
null
2405.12970
Face Adapter for Pre-Trained Diffusion Models with Fine-Grained ID and Attribute Control
['Yue Han', 'Junwei Zhu', 'Keke He', 'Xu Chen', 'Yanhao Ge', 'Wei Li', 'Xiangtai Li', 'Jiangning Zhang', 'Chengjie Wang', 'Yong Liu']
['cs.CV']
Current face reenactment and swapping methods mainly rely on GAN frameworks, but recent focus has shifted to pre-trained diffusion models for their superior generation capabilities. However, training these models is resource-intensive, and the results have not yet achieved satisfactory performance levels. To address th...
2024-05-21T17:50:12Z
Accepted to ECCV2024; Project Page: https://faceadapter.github.io/face-adapter.github.io/
null
null
null
null
null
null
null
null
null
2405.12972
Accelerating Resonance Searches via Signature-Oriented Pre-training
['Congqiao Li', 'Antonios Agapitos', 'Jovin Drews', 'Javier Duarte', 'Dawei Fu', 'Leyun Gao', 'Raghav Kansal', 'Gregor Kasieczka', 'Louis Moureaux', 'Huilin Qu', 'Cristina Mantilla Suarez', 'Qiang Li']
['hep-ph', 'hep-ex', 'physics.data-an']
The search for heavy resonances beyond the Standard Model (BSM) is a key objective at the LHC. While the recent use of advanced deep neural networks for boosted-jet tagging significantly enhances the sensitivity of dedicated searches, it is limited to specific final states, leaving vast potential BSM phase space undere...
2024-05-21T17:54:53Z
14 pages, 5 figures
null
null
null
null
null
null
null
null
null
2405.13010
UCCIX: Irish-eXcellence Large Language Model
['Khanh-Tung Tran', "Barry O'Sullivan", 'Hoang D. Nguyen']
['cs.CL', 'cs.AI']
The development of Large Language Models (LLMs) has predominantly focused on high-resource languages, leaving extremely low-resource languages like Irish with limited representation. This work presents UCCIX, a pioneering effort on the development of an open-source Irish-based LLM. We propose a novel framework for cont...
2024-05-13T13:19:27Z
null
null
null
null
null
null
null
null
null
null
2405.13053
MeteoRA: Multiple-tasks Embedded LoRA for Large Language Models
['Jingwei Xu', 'Junyu Lai', 'Yunpeng Huang']
['cs.CL', 'cs.AI', 'I.2.7']
The pretrain+fine-tune paradigm is foundational for deploying large language models (LLMs) across various downstream applications. Within this framework, Low-Rank Adaptation (LoRA) stands out for its parameter-efficient fine-tuning (PEFT), producing numerous reusable task-specific LoRA adapters. However, this approach ...
2024-05-19T20:46:07Z
26 pages
null
null
null
null
null
null
null
null
null
2405.13144
LLMs for Mathematical Modeling: Towards Bridging the Gap between Natural and Mathematical Languages
['Xuhan Huang', 'Qingning Shen', 'Yan Hu', 'Anningzhe Gao', 'Benyou Wang']
['cs.AI', 'cs.CL']
Large Language Models (LLMs) have demonstrated strong performance across various natural language processing tasks, yet their proficiency in mathematical reasoning remains a key challenge. Addressing the gap between natural and mathematical language requires advanced reasoning capabilities, approaching those of Artific...
2024-05-21T18:29:54Z
Findings of NAACL2025. Project: https://github.com/FreedomIntelligence/Mamo
null
null
LLMs for Mathematical Modeling: Towards Bridging the Gap between Natural and Mathematical Languages
['Xuhan Huang', 'Qingning Shen', 'Yan Hu', 'Anningzhe Gao', 'Benyou Wang']
2024
North American Chapter of the Association for Computational Linguistics
3
36
['Computer Science']
2405.13226
Dataset Decomposition: Faster LLM Training with Variable Sequence Length Curriculum
['Hadi Pouransari', 'Chun-Liang Li', 'Jen-Hao Rick Chang', 'Pavan Kumar Anasosalu Vasu', 'Cem Koc', 'Vaishaal Shankar', 'Oncel Tuzel']
['cs.CL', 'cs.LG']
Large language models (LLMs) are commonly trained on datasets consisting of fixed-length token sequences. These datasets are created by randomly concatenating documents of various lengths and then chunking them into sequences of a predetermined target length (concat-and-chunk). Recent attention implementations mask cro...
2024-05-21T22:26:01Z
NeurIPS 2024
null
null
Dataset Decomposition: Faster LLM Training with Variable Sequence Length Curriculum
['Hadi Pouransari', 'Chun-Liang Li', 'Jen-Hao Rick Chang', 'Pavan Kumar Anasosalu Vasu', 'Cem Koc', 'Vaishaal Shankar', 'Oncel Tuzel']
2024
Neural Information Processing Systems
11
66
['Computer Science']
2405.13382
VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding
['Yongxin Guo', 'Jingyu Liu', 'Mingda Li', 'Dingxin Cheng', 'Xiaoying Tang', 'Dianbo Sui', 'Qingbin Liu', 'Xi Chen', 'Kevin Zhao']
['cs.CV']
Video Temporal Grounding (VTG) strives to accurately pinpoint event timestamps in a specific video using linguistic queries, significantly impacting downstream tasks like video browsing and editing. Unlike traditional task-specific models, Video Large Language Models (video LLMs) can handle multiple tasks concurrently ...
2024-05-22T06:31:42Z
AAAI 2025
null
null
null
null
null
null
null
null
null
2405.13386
360Zhinao Technical Report
['360Zhinao Team']
['cs.CL', 'cs.AI']
We present 360Zhinao models with 7B parameter size and context lengths spanning 4K, 32K and 360K, all available at https://github.com/Qihoo360/360zhinao. For rapid development in pretraining, we establish a stable and sensitive ablation environment to evaluate and compare experiment runs with minimal model size. Under ...
2024-05-22T06:45:38Z
360Zhinao technical report. Github: https://github.com/Qihoo360/360zhinao
null
null
null
null
null
null
null
null
null
2405.13396
Fine-tuned In-Context Learning Transformers are Excellent Tabular Data Classifiers
['Felix den Breejen', 'Sangmin Bae', 'Stephen Cha', 'Se-Young Yun']
['cs.LG', 'stat.ML']
The recently introduced TabPFN pretrains an In-Context Learning (ICL) transformer on synthetic data to perform tabular data classification. In this work, we extend TabPFN to the fine-tuning setting, resulting in a significant performance boost. We also discover that fine-tuning enables ICL-transformers to create comple...
2024-05-22T07:13:55Z
null
null
null
null
null
null
null
null
null
null
2405.13448
Distilling Instruction-following Abilities of Large Language Models with Task-aware Curriculum Planning
['Yuanhao Yue', 'Chengyu Wang', 'Jun Huang', 'Peng Wang']
['cs.CL']
Instruction tuning aims to align large language models (LLMs) with open-domain instructions and human-preferred responses. While several studies have explored autonomous approaches to distilling and annotating instructions from powerful proprietary LLMs, such as ChatGPT, they often neglect the impact of the distributio...
2024-05-22T08:38:26Z
EMNLP 2024 Findings
null
null
Distilling Instruction-following Abilities of Large Language Models with Task-aware Curriculum Planning
['Yuanhao Yue', 'Chengyu Wang', 'Jun Huang', 'Peng Wang']
2024
Conference on Empirical Methods in Natural Language Processing
9
52
['Computer Science']
2405.13636
Audio Mamba: Pretrained Audio State Space Model For Audio Tagging
['Jiaju Lin', 'Haoxuan Hu']
['cs.SD', 'cs.AI', 'eess.AS']
Audio tagging is an important task of mapping audio samples to their corresponding categories. Recently endeavours that exploit transformer models in this field have achieved great success. However, the quadratic self-attention cost limits the scaling of audio transformer models and further constrains the development o...
2024-05-22T13:35:56Z
null
null
null
Audio Mamba: Pretrained Audio State Space Model For Audio Tagging
['Jiaju Lin', 'Haoxuan Hu']
2024
arXiv.org
9
21
['Computer Science', 'Engineering']
2405.13637
Curriculum Direct Preference Optimization for Diffusion and Consistency Models
['Florinel-Alin Croitoru', 'Vlad Hondru', 'Radu Tudor Ionescu', 'Nicu Sebe', 'Mubarak Shah']
['cs.CV', 'cs.AI', 'cs.LG']
Direct Preference Optimization (DPO) has been proposed as an effective and efficient alternative to reinforcement learning from human feedback (RLHF). In this paper, we propose a novel and enhanced version of DPO based on curriculum learning for text-to-image generation. Our method is divided into two training stages. ...
2024-05-22T13:36:48Z
Accepted at CVPR 2025
null
null
null
null
null
null
null
null
null
2405.13800
Dense Connector for MLLMs
['Huanjin Yao', 'Wenhao Wu', 'Taojiannan Yang', 'YuXin Song', 'Mengxi Zhang', 'Haocheng Feng', 'Yifan Sun', 'Zhiheng Li', 'Wanli Ouyang', 'Jingdong Wang']
['cs.CV', 'cs.AI']
Do we fully leverage the potential of visual encoder in Multimodal Large Language Models (MLLMs)? The recent outstanding performance of MLLMs in multimodal understanding has garnered broad attention from both academia and industry. In the current MLLM rat race, the focus seems to be predominantly on the linguistic side...
2024-05-22T16:25:03Z
27 pages, NeurIPS 2024
NeurIPS 2024
null
null
null
null
null
null
null
null
2405.13816
Getting More from Less: Large Language Models are Good Spontaneous Multilingual Learners
['Shimao Zhang', 'Changjiang Gao', 'Wenhao Zhu', 'Jiajun Chen', 'Xin Huang', 'Xue Han', 'Junlan Feng', 'Chao Deng', 'Shujian Huang']
['cs.CL']
Recently, Large Language Models (LLMs) have shown impressive language capabilities. While most of the existing LLMs have very unbalanced performance across different languages, multilingual alignment based on translation parallel data is an effective method to enhance the LLMs' multilingual capabilities. In this work, ...
2024-05-22T16:46:19Z
null
null
null
null
null
null
null
null
null
null
2405.13865
ReVideo: Remake a Video with Motion and Content Control
['Chong Mou', 'Mingdeng Cao', 'Xintao Wang', 'Zhaoyang Zhang', 'Ying Shan', 'Jian Zhang']
['cs.CV']
Despite significant advancements in video generation and editing using diffusion models, achieving accurate and localized video editing remains a substantial challenge. Additionally, most existing video editing methods primarily focus on altering visual content, with limited research dedicated to motion editing. In thi...
2024-05-22T17:46:08Z
null
null
null
ReVideo: Remake a Video with Motion and Content Control
['Chong Mou', 'Mingdeng Cao', 'Xintao Wang', 'Zhaoyang Zhang', 'Ying Shan', 'Jian Zhang']
2024
Neural Information Processing Systems
31
0
['Computer Science']
2405.13929
Vikhr: The Family of Open-Source Instruction-Tuned Large Language Models for Russian
['Aleksandr Nikolich', 'Konstantin Korolev', 'Sergei Bratchikov', 'Igor Kiselev', 'Artem Shelmanov']
['cs.CL', 'cs.AI']
There has been a surge in the development of various Large Language Models (LLMs). However, text generation for languages other than English often faces significant challenges, including poor generation quality and reduced computational performance due to the disproportionate representation of tokens in the model's voc...
2024-05-22T18:58:58Z
null
null
null
Vikhr: The Family of Open-Source Instruction-Tuned Large Language Models for Russian
['Aleksandr Nikolich', 'Konstantin Korolev', 'Artem Shelmanov']
2024
arXiv.org
11
33
['Computer Science']
2405.14129
AlignGPT: Multi-modal Large Language Models with Adaptive Alignment Capability
['Fei Zhao', 'Taotian Pang', 'Chunhui Li', 'Zhen Wu', 'Junjie Guo', 'Shangyu Xing', 'Xinyu Dai']
['cs.CL', 'cs.AI', 'cs.CV']
Multimodal Large Language Models (MLLMs) are widely regarded as crucial in the exploration of Artificial General Intelligence (AGI). The core of MLLMs lies in their capability to achieve cross-modal alignment. To attain this goal, current MLLMs typically follow a two-phase training paradigm: the pre-training phase and ...
2024-05-23T03:07:56Z
null
null
null
AlignGPT: Multi-modal Large Language Models with Adaptive Alignment Capability
['Fei Zhao', 'Taotian Pang', 'Chunhui Li', 'Zhen Wu', 'Junjie Guo', 'Shangyu Xing', 'Xinyu Dai']
2024
arXiv.org
7
46
['Computer Science']
2405.14141
ViHateT5: Enhancing Hate Speech Detection in Vietnamese With A Unified Text-to-Text Transformer Model
['Luan Thanh Nguyen']
['cs.CL']
Recent advancements in hate speech detection (HSD) in Vietnamese have made significant progress, primarily attributed to the emergence of transformer-based pre-trained language models, particularly those built on the BERT architecture. However, the necessity for specialized fine-tuned models has resulted in the complex...
2024-05-23T03:31:50Z
Accepted at ACL'2024 (Findings)
null
null
null
null
null
null
null
null
null
2405.14205
Agent Planning with World Knowledge Model
['Shuofei Qiao', 'Runnan Fang', 'Ningyu Zhang', 'Yuqi Zhu', 'Xiang Chen', 'Shumin Deng', 'Yong Jiang', 'Pengjun Xie', 'Fei Huang', 'Huajun Chen']
['cs.CL', 'cs.AI', 'cs.CV', 'cs.LG', 'cs.MA']
Recent endeavors towards directly using large language models (LLMs) as agent models to execute interactive planning tasks have shown commendable results. Despite their achievements, however, they still struggle with brainless trial-and-error in global planning and generating hallucinatory actions in local planning due...
2024-05-23T06:03:19Z
NeurIPS 2024
null
null
null
null
null
null
null
null
null
2405.14295
Focus Anywhere for Fine-grained Multi-page Document Understanding
['Chenglong Liu', 'Haoran Wei', 'Jinyue Chen', 'Lingyu Kong', 'Zheng Ge', 'Zining Zhu', 'Liang Zhao', 'Jianjian Sun', 'Chunrui Han', 'Xiangyu Zhang']
['cs.CV']
Modern LVLMs still struggle to achieve fine-grained document understanding, such as OCR/translation/caption for regions of interest to the user, tasks that require the context of the entire page, or even multiple pages. Accordingly, this paper proposes Fox, an effective pipeline, hybrid data, and tuning strategy, that ...
2024-05-23T08:15:49Z
null
null
null
Focus Anywhere for Fine-grained Multi-page Document Understanding
['Chenglong Liu', 'Haoran Wei', 'Jinyue Chen', 'Lingyu Kong', 'Zheng Ge', 'Zining Zhu', 'Liang Zhao', 'Jian‐Yuan Sun', 'Chunrui Han', 'Xiangyu Zhang']
2024
arXiv.org
25
44
['Computer Science']
2405.14297
Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models
['Yongxin Guo', 'Zhenglin Cheng', 'Xiaoying Tang', 'Zhaopeng Tu', 'Tao Lin']
['cs.LG', 'cs.AI']
The Sparse Mixture of Experts (SMoE) has been widely employed to enhance the efficiency of training and inference for Transformer-based foundational models, yielding promising results. However, the performance of SMoE heavily depends on the choice of hyper-parameters, such as the number of experts and the number of expe...
2024-05-23T08:18:30Z
ICLR 2025
null
null
Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models
['Yongxin Guo', 'Zhenglin Cheng', 'Xiaoying Tang', 'Tao Lin']
2024
International Conference on Learning Representations
9
69
['Computer Science']
2405.14333
DeepSeek-Prover: Advancing Theorem Proving in LLMs through Large-Scale Synthetic Data
['Huajian Xin', 'Daya Guo', 'Zhihong Shao', 'Zhizhou Ren', 'Qihao Zhu', 'Bo Liu', 'Chong Ruan', 'Wenda Li', 'Xiaodan Liang']
['cs.AI']
Proof assistants like Lean have revolutionized mathematical proof verification, ensuring high accuracy and reliability. Although large language models (LLMs) show promise in mathematical reasoning, their advancement in formal theorem proving is hindered by a lack of training data. To address this issue, we introduce an...
2024-05-23T09:03:42Z
null
null
null
DeepSeek-Prover: Advancing Theorem Proving in LLMs through Large-Scale Synthetic Data
['Huajian Xin', 'Daya Guo', 'Zhihong Shao', 'Z. Ren', 'Qihao Zhu', 'Bo Liu (Benjamin Liu)', 'C. Ruan', 'Wenda Li', 'Xiaodan Liang']
2024
arXiv.org
91
35
['Computer Science']
2405.14365
JiuZhang3.0: Efficiently Improving Mathematical Reasoning by Training Small Data Synthesis Models
['Kun Zhou', 'Beichen Zhang', 'Jiapeng Wang', 'Zhipeng Chen', 'Wayne Xin Zhao', 'Jing Sha', 'Zhichao Sheng', 'Shijin Wang', 'Ji-Rong Wen']
['cs.CL', 'cs.AI']
Mathematical reasoning is an important capability of large language models~(LLMs) for real-world applications. To enhance this capability, existing work either collects large-scale math-related texts for pre-training, or relies on stronger LLMs (e.g., GPT-4) to synthesize massive math problems. Both types of work general...
2024-05-23T09:43:19Z
28 pages, SOTA math LLM using Well-trained Data Synthesis LLM
null
null
null
null
null
null
null
null
null
2405.14385
Emotion Identification for French in Written Texts: Considering their Modes of Expression as a Step Towards Text Complexity Analysis
['Aline Étienne', 'Delphine Battistelli', 'Gwénolé Lecorvé']
['cs.CL', 'cs.AI']
The objective of this paper is to predict (A) whether a sentence in a written text expresses an emotion, (B) the mode(s) in which it is expressed, (C) whether it is basic or complex, and (D) its emotional category. One of our major contributions, through a dataset and a model, is to integrate the fact that an emotion...
2024-05-23T10:02:13Z
17 pages, 12 figures, submitted to ACL 2024 WASSA workshop
null
null
null
null
null
null
null
null
null
2405.14438
LoRA-Ensemble: Efficient Uncertainty Modelling for Self-Attention Networks
['Dominik J. Mühlematter', 'Michelle Halbheer', 'Alexander Becker', 'Dominik Narnhofer', 'Helge Aasen', 'Konrad Schindler', 'Mehmet Ozgur Turkoglu']
['cs.LG']
Numerous real-world decisions rely on machine learning algorithms and require calibrated uncertainty estimates. However, modern methods often yield overconfident, uncalibrated predictions. The dominant approach to quantifying the uncertainty inherent in the model is to train an ensemble of separate predictors and measu...
2024-05-23T11:10:32Z
under review
null
null
LoRA-Ensemble: Efficient Uncertainty Modelling for Self-attention Networks
['Michelle Halbheer', 'Dominik J. Mühlematter', 'Alexander Becker', 'Dominik Narnhofer', 'Helge Aasen', 'Konrad Schindler', 'Mehmet Ozgur Turkoglu']
2024
arXiv.org
3
65
['Computer Science']
2405.14449
Adversarial Schrödinger Bridge Matching
['Nikita Gushchin', 'Daniil Selikhanovych', 'Sergei Kholkin', 'Evgeny Burnaev', 'Alexander Korotin']
['cs.LG']
The Schrödinger Bridge (SB) problem offers a powerful framework for combining optimal transport and diffusion models. A promising recent approach to solve the SB problem is the Iterative Markovian Fitting (IMF) procedure, which alternates between Markovian and reciprocal projections of continuous-time stochastic proc...
2024-05-23T11:29:33Z
null
null
null
Adversarial Schrödinger Bridge Matching
['Nikita Gushchin', 'Daniil Selikhanovych', 'Sergei Kholkin', 'Evgeny Burnaev', 'Alexander Korotin']
2024
Neural Information Processing Systems
3
57
['Computer Science']
2405.14458
YOLOv10: Real-Time End-to-End Object Detection
['Ao Wang', 'Hui Chen', 'Lihao Liu', 'Kai Chen', 'Zijia Lin', 'Jungong Han', 'Guiguang Ding']
['cs.CV']
Over the past years, YOLOs have emerged as the predominant paradigm in the field of real-time object detection owing to their effective balance between computational cost and detection performance. Researchers have explored the architectural designs, optimization objectives, data augmentation strategies, and others for...
2024-05-23T11:44:29Z
Code: https://github.com/THU-MIG/yolov10; NeurIPS 2024 Camera-ready Version
null
null
null
null
null
null
null
null
null
2405.14488
MoGU: A Framework for Enhancing Safety of Open-Sourced LLMs While Preserving Their Usability
['Yanrui Du', 'Sendong Zhao', 'Danyang Zhao', 'Ming Ma', 'Yuhan Chen', 'Liangyu Huo', 'Qing Yang', 'Dongliang Xu', 'Bing Qin']
['cs.CL']
Large Language Models (LLMs) are increasingly deployed in various applications. As their usage grows, concerns regarding their safety are rising, especially in maintaining harmless responses when faced with malicious instructions. Many defense strategies have been developed to enhance the safety of LLMs. However, our r...
2024-05-23T12:19:59Z
null
null
null
MoGU: A Framework for Enhancing Safety of Open-Sourced LLMs While Preserving Their Usability
['Yanrui Du', 'Sendong Zhao', 'Danyang Zhao', 'Ming Ma', 'Yuhan Chen', 'Liangyu Huo', 'Qing Yang', 'Dongliang Xu', 'Bing Qin']
2024
arXiv.org
11
44
['Computer Science']
2405.14573
AndroidWorld: A Dynamic Benchmarking Environment for Autonomous Agents
['Christopher Rawles', 'Sarah Clinckemaillie', 'Yifan Chang', 'Jonathan Waltz', 'Gabrielle Lau', 'Marybeth Fair', 'Alice Li', 'William Bishop', 'Wei Li', 'Folawiyo Campbell-Ajala', 'Daniel Toyama', 'Robert Berry', 'Divya Tyamagundlu', 'Timothy Lillicrap', 'Oriana Riva']
['cs.AI', 'cs.LG']
Autonomous agents that execute human tasks by controlling computers can enhance human productivity and application accessibility. However, progress in this field will be driven by realistic and reproducible benchmarks. We present AndroidWorld, a fully functional Android environment that provides reward signals for 116 ...
2024-05-23T13:48:54Z
null
null
null
null
null
null
null
null
null
null
2405.14654
Efficient Medical Question Answering with Knowledge-Augmented Question Generation
['Julien Khlaut', 'Corentin Dancette', 'Elodie Ferreres', 'Alaedine Bennani', 'Paul Hérent', 'Pierre Manceron']
['cs.CL', 'cs.AI']
In the expanding field of language model applications, medical knowledge representation remains a significant challenge due to the specialized nature of the domain. Large language models, such as GPT-4, obtain reasonable scores on medical question answering tasks, but smaller models are far behind. In this work, we int...
2024-05-23T14:53:52Z
Accepted at the Clinical Natural Language Processing Workshop, NAACL 2024
null
null
null
null
null
null
null
null
null
2405.14734
SimPO: Simple Preference Optimization with a Reference-Free Reward
['Yu Meng', 'Mengzhou Xia', 'Danqi Chen']
['cs.CL', 'cs.LG']
Direct Preference Optimization (DPO) is a widely used offline preference optimization algorithm that reparameterizes reward functions in reinforcement learning from human feedback (RLHF) to enhance simplicity and training stability. In this work, we propose SimPO, a simpler yet more effective approach. The effectivenes...
2024-05-23T16:01:46Z
NeurIPS 2024. Code & models: https://github.com/princeton-nlp/SimPO. v3 updates: Gemma 2 results (Appendix J); more discussions about length normalization (Section 2.2) and KL regularization (Section 2.3)
null
null
SimPO: Simple Preference Optimization with a Reference-Free Reward
['Yu Meng', 'Mengzhou Xia', 'Danqi Chen']
2024
Neural Information Processing Systems
494
99
['Computer Science']
2405.14753
A Transformer-Based Approach for Smart Invocation of Automatic Code Completion
['Aral de Moor', 'Arie van Deursen', 'Maliheh Izadi']
['cs.SE', 'cs.AI', 'cs.HC', 'cs.LG']
Transformer-based language models are highly effective for code completion, with much research dedicated to enhancing the content of these completions. Despite their effectiveness, these models come with high operational costs and can be intrusive, especially when they suggest too often and interrupt developers who are...
2024-05-23T16:19:32Z
10 pages, 3 figures; Accepted at FSE AIWARE'24
null
10.1145/3664646.3664760
null
null
null
null
null
null
null
2405.14793
SEA-RAFT: Simple, Efficient, Accurate RAFT for Optical Flow
['Yihan Wang', 'Lahav Lipson', 'Jia Deng']
['cs.CV']
We introduce SEA-RAFT, a simpler, more efficient, and more accurate RAFT for optical flow. Compared with RAFT, SEA-RAFT is trained with a new loss (mixture of Laplace). It directly regresses an initial flow for faster convergence in iterative refinements and introduces rigid-motion pre-training to improve generalization. SEA...
2024-05-23T17:04:04Z
null
null
null
null
null
null
null
null
null
null
2405.14832
Direct3D: Scalable Image-to-3D Generation via 3D Latent Diffusion Transformer
['Shuang Wu', 'Youtian Lin', 'Feihu Zhang', 'Yifei Zeng', 'Jingxi Xu', 'Philip Torr', 'Xun Cao', 'Yao Yao']
['cs.CV']
Generating high-quality 3D assets from text and images has long been challenging, primarily due to the absence of scalable 3D representations capable of capturing intricate geometry distributions. In this work, we introduce Direct3D, a native 3D generative model scalable to in-the-wild input images, without requiring a...
2024-05-23T17:49:37Z
null
null
null
null
null
null
null
null
null
null
2405.14839
A Textbook Remedy for Domain Shifts: Knowledge Priors for Medical Image Analysis
['Yue Yang', 'Mona Gandhi', 'Yufei Wang', 'Yifan Wu', 'Michael S. Yao', 'Chris Callison-Burch', 'James C. Gee', 'Mark Yatskar']
['cs.CV', 'cs.CL']
While deep networks have achieved broad success in analyzing natural images, when applied to medical scans, they often fail in unexpected situations. We investigate this challenge and focus on model sensitivity to domain shifts, such as data sampled from different hospitals or data confounded by demographic variables s...
2024-05-23T17:55:02Z
Published in NeurIPS 2024 (Spotlight), project page: https://yueyang1996.github.io/knobo/
null
null
A Textbook Remedy for Domain Shifts: Knowledge Priors for Medical Image Analysis
['Yue Yang', 'Mona Gandhi', 'Yufei Wang', 'Yifan Wu', 'Michael S. Yao', 'Christopher Callison-Burch', 'James C. Gee', 'Mark Yatskar']
2024
Neural Information Processing Systems
4
97
['Computer Science', 'Medicine']
2405.14852
PV-Tuning: Beyond Straight-Through Estimation for Extreme LLM Compression
['Vladimir Malinovskii', 'Denis Mazur', 'Ivan Ilin', 'Denis Kuznedelev', 'Konstantin Burlachenko', 'Kai Yi', 'Dan Alistarh', 'Peter Richtarik']
['cs.LG']
There has been significant interest in "extreme" compression of large language models (LLMs), i.e., to 1-2 bits per parameter, which allows such models to be executed efficiently on resource-constrained devices. Existing work focused on improved one-shot quantization techniques and weight representations; yet, purely p...
2024-05-23T17:57:04Z
Preprint
null
null
null
null
null
null
null
null
null
2405.14854
TerDiT: Ternary Diffusion Models with Transformers
['Xudong Lu', 'Aojun Zhou', 'Ziyi Lin', 'Qi Liu', 'Yuhui Xu', 'Renrui Zhang', 'Xue Yang', 'Junchi Yan', 'Peng Gao', 'Hongsheng Li']
['cs.CV', 'cs.LG']
Recent developments in large-scale pre-trained text-to-image diffusion models have significantly improved the generation of high-fidelity images, particularly with the emergence of diffusion transformer models (DiTs). Among diffusion models, diffusion transformers have demonstrated superior image-generation capabilitie...
2024-05-23T17:57:24Z
null
null
null
null
null
null
null
null
null
null
2405.14867
Improved Distribution Matching Distillation for Fast Image Synthesis
['Tianwei Yin', 'Michaël Gharbi', 'Taesung Park', 'Richard Zhang', 'Eli Shechtman', 'Fredo Durand', 'William T. Freeman']
['cs.CV']
Recent approaches have shown promise in distilling diffusion models into efficient one-step generators. Among them, Distribution Matching Distillation (DMD) produces one-step generators that match their teacher in distribution, without enforcing a one-to-one correspondence with the sampling trajectories of their teachers...
2024-05-23T17:59:49Z
Code, model, and dataset are available at https://tianweiy.github.io/dmd2
null
null
null
null
null
null
null
null
null
2405.14905
Structural Entities Extraction and Patient Indications Incorporation for Chest X-ray Report Generation
['Kang Liu', 'Zhuoqi Ma', 'Xiaolu Kang', 'Zhusi Zhong', 'Zhicheng Jiao', 'Grayson Baird', 'Harrison Bai', 'Qiguang Miao']
['eess.IV', 'cs.AI', 'cs.CL']
The automated generation of imaging reports proves invaluable in alleviating the workload of radiologists. A clinically applicable report generation algorithm should demonstrate its effectiveness in producing reports that accurately describe radiology findings and attend to patient-specific indications. In this paper,...
2024-05-23T01:29:47Z
The code is available at https://github.com/mk-runner/SEI-Temp or https://github.com/mk-runner/SEI
null
null
null
null
null
null
null
null
null